Haley A.I.-as-a-Service incorporates numerous machine learning algorithms and technologies to drive intelligent interactions. There is no one-size-fits-all algorithm for artificial intelligence, so we maintain a large toolbox and choose the right tool for each job. Below is a quick overview of some of the components we incorporate.
The Vital Development Kit (VDK) provides the foundational knowledge representation tool VitalSigns, which compiles knowledge models into code, keeping knowledge consistent across Haley.
The Vital Development Kit (VDK) and VitalSigns use the W3C standard OWL (Web Ontology Language) to represent knowledge objects, their properties, and their relationships.
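As a rough illustration of the kind of knowledge OWL can capture, the fragment below (in Turtle syntax) declares two classes and an object property connecting them; the names and namespace are hypothetical, not taken from the actual VDK models.

```turtle
# Illustrative OWL fragment in Turtle syntax; names are hypothetical.
@prefix :     <http://example.org/store#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

:Shoe  rdf:type owl:Class .
:Brand rdf:type owl:Class .

# An object property relating a Brand to the Shoes it makes.
:makes rdf:type owl:ObjectProperty ;
       rdfs:domain :Brand ;
       rdfs:range  :Shoe .
```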
Within Haley, we've defined "Entities" and "Relationships" as dynamically defined knowledge objects that can be used within the service. An "entity" is a "thing", and a "relationship" is a connection between things. As an example, an eCommerce store that sells shoes may want to define entities for "Shoe", "Brand", and "Category", with relationships for "makes" ("Nike" makes "Air Jordan") and "belongs-to" ("Air Jordan" belongs-to "BasketballShoes"). Haley can then use these entities and relationships to recommend items to users.
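The shoe-store example above can be sketched in plain Python; this is an illustrative model using dataclasses, not the actual Haley or VDK API, and all names are hypothetical.

```python
# Hypothetical sketch of entities and relationships as a small knowledge
# graph, modeled with plain Python dataclasses (illustrative only; not the
# actual Haley/VDK API).
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    kind: str   # e.g. "Shoe", "Brand", "Category"
    name: str

@dataclass(frozen=True)
class Relationship:
    kind: str   # e.g. "makes", "belongs-to"
    source: Entity
    target: Entity

nike = Entity("Brand", "Nike")
air_jordan = Entity("Shoe", "Air Jordan")
basketball = Entity("Category", "BasketballShoes")

graph = [
    Relationship("makes", nike, air_jordan),
    Relationship("belongs-to", air_jordan, basketball),
]

def related(entity, kind, graph):
    """Follow outgoing relationships of the given kind from an entity."""
    return [r.target for r in graph if r.source == entity and r.kind == kind]

# e.g. which shoes does Nike make?
print([e.name for e in related(nike, "makes", graph)])  # ['Air Jordan']
```

A recommender can then traverse such a graph, for example surfacing other shoes in the same category as one the user viewed.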
PyTorch is a free and open-source machine learning framework based on the Torch library, used for applications such as computer vision and natural language processing. It was originally developed by Meta AI and is now under the Linux Foundation umbrella.
TensorFlow is a free and open-source software library for machine learning and artificial intelligence. It can be used across a range of tasks but has a particular focus on training and inference of deep neural networks. TensorFlow was developed by the Google Brain team for internal Google use in research and production. The initial version was released under the Apache License 2.0.
Ludwig is a declarative machine learning framework that makes it easy to define machine learning pipelines using a simple and flexible data-driven configuration system. Ludwig is suitable for a wide variety of AI tasks, and is hosted by the Linux Foundation AI & Data.
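To give a flavor of Ludwig's declarative style, the configuration below defines a text classifier purely by naming input and output columns; the column names are hypothetical, and real configurations can add preprocessing, model, and training options.

```yaml
# Illustrative Ludwig configuration: train a model that reads a text column
# and predicts a category column. Column names are hypothetical.
input_features:
  - name: review_text
    type: text
output_features:
  - name: sentiment
    type: category
```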
Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. Given an initial text as a prompt, it will produce text that continues the prompt. The architecture is a standard transformer network (with a few engineering tweaks) with the unprecedented size of a 2048-token-long context and 175 billion parameters (requiring 800 GB of storage). The training method is "generative pretraining", meaning that the model is trained to predict the next token. The model demonstrated strong few-shot learning on many text-based tasks.
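The autoregressive loop itself is simple to illustrate: predict the next token from the context, append it, and repeat. The toy sketch below substitutes a bigram frequency table for the transformer so the loop structure is visible; it is in no way representative of GPT-3's scale or quality.

```python
# Toy autoregressive generation: predict the next token, append, repeat.
# GPT-3 uses a 175B-parameter transformer for the prediction step; here a
# bigram count table stands in so only the loop structure is shown.
from collections import Counter, defaultdict

corpus = "the shoe fits the foot and the shoe sells".split()

# "Training": count which token follows which (a stand-in for the real
# gradient-based pretraining objective of next-token prediction).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(prompt_tokens, steps):
    tokens = list(prompt_tokens)
    for _ in range(steps):
        candidates = follows.get(tokens[-1])
        if not candidates:
            break
        # Greedy decoding: always pick the most frequent continuation.
        tokens.append(candidates.most_common(1)[0][0])
    return tokens

print(" ".join(generate(["the"], 3)))
```

Real models replace the count table with a learned probability distribution over the whole vocabulary and sample from it rather than always taking the single most likely token.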
Haley utilizes reasoning engines to determine what actions to perform or what items to recommend. Reasoning includes rules, which may be used to encode domain and business logic. Planning is used to determine which steps are necessary to reach the desired goal.
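Rule-based reasoning of this kind can be sketched with a minimal forward-chaining loop: rules fire when all their conditions are present as facts, adding their conclusion, until no more rules apply. This is an illustrative sketch with made-up rule names, not Haley's actual reasoner.

```python
# Minimal forward-chaining rule engine sketch (illustrative only).
# Each rule is (set of condition facts, conclusion fact).
rules = [
    ({"customer_viewed_shoe", "shoe_in_stock"}, "recommend_shoe"),
    ({"recommend_shoe", "customer_has_coupon"}, "offer_discount"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions hold, until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain(
    {"customer_viewed_shoe", "shoe_in_stock", "customer_has_coupon"}, rules)
print("offer_discount" in derived)  # True: both rules fired in sequence
```

Note that the second rule only fires because the first one added "recommend_shoe" to the fact set, which is the chaining that lets simple rules compose into multi-step business logic.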
Reasoning engines tend to produce binary true-or-false conclusions, but Bayesian reasoning can be used to estimate the likelihood of outcomes - such as the likelihood of a recommendation leading to a purchase.
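The purchase example can be worked through with Bayes' rule; all numbers below are invented for illustration.

```python
# Bayes' rule applied to the recommendation example:
#   P(purchase | clicked) = P(clicked | purchase) * P(purchase) / P(clicked)
# All probabilities are made up for illustration.
p_purchase = 0.05               # prior: sessions that end in a purchase
p_click_given_purchase = 0.80   # buyers who had clicked the recommendation
p_click = 0.20                  # overall click rate on recommendations

p_purchase_given_click = p_click_given_purchase * p_purchase / p_click
print(round(p_purchase_given_click, 2))  # 0.2
```

So under these made-up numbers, a click on a recommendation raises the purchase probability from the 5% prior to 20%, which is the kind of graded judgment a binary rule cannot express.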
Haley can utilize external sources of knowledge such as data provided by the customer (such as an inventory system) or from external providers. Wikidata provides a structured set of data derived from Wikipedia, and Google provides the Knowledge Graph service, both of which can provide a depth of knowledge to Haley for many applications.
Haley can utilize externally provided A.I. services from providers such as Amazon, Google, OpenAI, and HuggingFace. Rather than reinvent the wheel, it's often quicker to utilize an off-the-shelf service as a starting point, and train a customized system once the value is proven.