While AI/ML attempts to mimic components of human intelligence, it still relies on human intelligence and, like humans, it is far from infallible. Its performance depends on the quality and range of the data it is trained on, and on the design of the algorithms it uses to make decisions.
Often, the algorithms used in AI models are kept secret or are not readily understood, making it difficult to verify the results they return or to probe them for potential bias.
At Havos.ai, we believe in transparency and in reducing algorithmic bias as much as possible, so we operate on an open, rather than a closed ‘black box’, basis.
We use rule-based and transformer-based models to create structure from unstructured material. We then refine this knowledge to answer more specific questions, through advanced content filtering informed by our subject-matter experts.
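The Havos.ai pipeline itself is not described here, but as a minimal sketch of the rule-based side of this idea, the following hypothetical Python example pulls structured fields out of unstructured text with named rules; the rule names and regular expressions are illustrative assumptions, not the actual Havos.ai configuration.

```python
import re

# Hypothetical illustration: a tiny rule-based extractor that pulls
# structured fields out of unstructured text. A production pipeline
# would combine rules like these with transformer-based models.
RULES = {
    "emails": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "years": re.compile(r"\b(19|20)\d{2}\b"),
}

def extract_structure(text: str) -> dict:
    """Apply each named rule to the text and return the matched spans."""
    return {
        name: [m.group(0) for m in rule.finditer(text)]
        for name, rule in RULES.items()
    }

record = extract_structure(
    "Contact jane.doe@example.org about the 2021 filings."
)
# record maps each rule name to its matches, e.g. the email address
# and the year found in the sentence above.
```

Because the rules are kept as data rather than hard-coded, they can be inspected, extended, or re-run against the same material later, which is the kind of openness the surrounding text describes.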
But we also recognize that there is not just one set of answers, so we don’t “throw away” any search or filtering parameters – our data can be re-interrogated or reconfigured to address any number of present or future queries, and all of our algorithms are fully accessible.