A rapidly growing number of businesses are adopting Artificial Intelligence (AI) to reduce their operational costs, improve customer experience, and/or generate new sources of revenue. According to Gartner's 2019 CIO Survey, the number of enterprises implementing AI solutions has grown 270% in the past four years and tripled in the past year.
This rapid growth reflects AI's high potential to create business value. It is estimated that AI augmentation alone will create $2.9 trillion of business value and generate 6.2 billion hours of worker productivity globally by 2021.
However, successfully adopting AI brings about a host of business and technical challenges including but not limited to:
— Defining an AI strategy that can deliver a demonstrable positive business outcome, including identifying which use cases will yield the highest return on investment;
— Ensuring explainable AI, the ability to provide transparency on AI-driven decision-making;
— Addressing ethical issues related to AI including how critical decisions are made based on insights provided by complex algorithms;
— Adopting a new organizational culture that sees AI not as a threat but as a tool to augment human thinking and enable better, faster decisions;
— Ensuring legal and compliance risks are properly addressed;
— Coupling AI-driven solutions to core decision support and transactional systems, and connecting advanced AI systems to traditional applications and technical infrastructure;
— Ingesting structured and unstructured, internally and externally generated data that is spread across multiple silos; and
— Ensuring the validity of the data that will be exposed to AI-driven solutions.
Have the right business focus
AI projects succeed or fail to the extent that specific use cases are identified with the potential to demonstrate meaningful business value. Defining the business value that AI and machine learning (ML) solutions can deliver around a specific use case is paramount. Don't pursue the technology simply because it appears attractive.
While AI promises considerable economic benefits, gaining broad traction with business stakeholders is possible only if specific business outcomes can be identified upfront. Innovations in deep learning that leverage artificial neural networks continue to advance, but using the technology in areas such as facial recognition and conversational AI has become table stakes.
For example, in financial services, more sophisticated risk analysis, anti-money laundering, advanced claims management, credit-worthiness evaluation, and intelligent customer onboarding have become prime focus areas. In manufacturing, predictive supply-chain management, predictive maintenance, and smart demand forecasting are where most of the investment is going.
Robotics and IoT in the plant also are table stakes now. And in retail, predictive inventory planning, recommendation engines, and hyper-personalized customer engagement have become critical competitive opportunities. All industries have specific use-cases where AI is transforming the business. But the best opportunities are those where AI can drive significant business disruption.
Aside from having the right business focus, upskilling the workforce to work with AI is equally critical to implementing modern AI-based solutions. The ability to democratize AI transformation across the enterprise by adopting tools and capabilities to enable business users to quickly test algorithms also will be crucial to gaining traction.
The right foundation for AI
Aside from identifying the right business use case for AI, companies are increasingly challenged to put in place the right technical infrastructure to support modern AI applications. The integration of traditional software and data environments with modern ML and deep-learning applications is proving to be a formidable challenge.
The information harvesting layer of a modern technology architecture is where data is efficiently scanned and cataloged. Market-leading services and solutions can be leveraged to connect with any data source inside or outside the enterprise. The services within this architecture layer automatically extract, unify, and organize information, leveraging semantic technologies that enable ingestion of this data into the knowledge fabric. This is one of the most challenging aspects of the architecture and requires the adoption of advanced and modern techniques, such as scalable machine learning and natural language processing (NLP).
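As a toy illustration of this extraction step, the sketch below mines a single fact pattern from free text with a regular expression. A production harvesting layer would rely on the scalable ML and NLP services described above; the `Fact` type, `harvest` function, and sample document here are all hypothetical.

```python
import re
from dataclasses import dataclass


@dataclass
class Fact:
    """One extracted statement, tagged with the document it came from."""
    subject: str
    predicate: str
    obj: str
    source: str


def harvest(doc_id, text):
    """Mine simple 'X acquired Y' statements from unstructured text."""
    facts = []
    for m in re.finditer(r"([A-Z][\w ]*?) acquired ([A-Z][\w ]*?)(?:[.,]|$)", text):
        facts.append(Fact(m.group(1).strip(), "acquired", m.group(2).strip(), doc_id))
    return facts


facts = harvest("press-release-042", "Acme Corp acquired Widget Inc. in 2020.")
# facts[0].subject == 'Acme Corp', facts[0].obj == 'Widget Inc'
```

Each extracted fact keeps a pointer back to its source document, which is what lets the downstream knowledge layer remain auditable.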
The knowledge fabric layer is where enterprise data is converted to knowledge. The most common and efficient way of representing an enterprise's knowledge domain and artifacts in a form understood by both humans and machines is an enterprise knowledge graph (EKG). An EKG is a perfect way to relate your structured and unstructured information and to discover facts about your organization.
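To make the concept concrete, here is a minimal, stdlib-only sketch of the triple-store idea behind an EKG. A real deployment would use a scalable graph database; the class and the sample entities below are invented for illustration.

```python
from collections import defaultdict


class KnowledgeGraph:
    """Minimal in-memory triple store: (subject, predicate, object)."""

    def __init__(self):
        self.triples = set()
        self.by_subject = defaultdict(set)

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))
        self.by_subject[subject].add((predicate, obj))

    def facts_about(self, subject):
        """Everything the graph knows about one entity, from any source."""
        return sorted(self.by_subject[subject])


kg = KnowledgeGraph()
# A structured source: a CRM record.
kg.add("Acme Corp", "industry", "Manufacturing")
# An unstructured source: a fact mined from a news article.
kg.add("Acme Corp", "acquired", "Widget Inc")

kg.facts_about("Acme Corp")
# [('acquired', 'Widget Inc'), ('industry', 'Manufacturing')]
```

The point of the sketch is that facts from structured and unstructured sources end up in one uniform representation, queryable by entity.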
With a proper knowledge infrastructure, you can seamlessly combine highly scalable graph database technologies with complementary storage and search systems to deliver actionable insights. This empowers humans and teams to focus on data analysis, rather than data collection.
This layer also can feature a proprietary set of knowledge accelerators that include domain-specific ontologies across industry verticals and business methods.
The preceding layers prepare data to support AI algorithms without requiring those algorithms to be concerned with data collection. The enterprise AI layer is where AI models and algorithms are embedded into the very core of the architecture to create valuable insights, with the potential to augment human thinking across disciplines and to innovate operations, processes, products, and more.
The enterprise AI layer takes advantage of highly scalable ML frameworks for both the training and deployment phases. In the training phase, historical data is used to train and evaluate machine-learning models. The selected models are then deployed in the same layer, to be consumed by humans or other applications across the enterprise. Multiple models can be deployed simultaneously to serve different enterprise use cases. One architecture serves all.
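A highly simplified sketch of the "multiple models behind one serving layer" pattern follows. The use cases, toy training functions, and registry API are all hypothetical stand-ins for a real ML framework.

```python
# "Training" phase: each function fits a trivial model to historical data
# and returns a callable that can serve predictions.

def train_churn_model(history):
    # The historical churn rate becomes the predicted probability.
    rate = sum(history) / len(history)
    return lambda features: rate


def train_demand_model(history):
    # Average past demand, scaled by a seasonality feature at serving time.
    avg = sum(history) / len(history)
    return lambda features: avg * features.get("seasonality", 1.0)


# "Deployment" phase: both models sit behind one registry, keyed by use case.
registry = {
    "churn": train_churn_model([0, 1, 0, 0]),
    "demand": train_demand_model([100, 120, 80]),
}


def predict(use_case, features):
    return registry[use_case](features)


predict("churn", {})                      # 0.25
predict("demand", {"seasonality": 1.1})   # ~110
```

Adding a new use case means training one more model and registering it; the serving interface, and the architecture around it, stay the same.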
Human and machine consumption
Ultimately, derived knowledge has to be consumed by humans or machines in an intuitive manner. The Human & Machine consumption layer provides easy-to-use interfaces across web, mobile, and API services to enable access to data in the knowledge fabric layer.
Workflow orchestration and security
All previously identified processes need to be managed (scheduled and monitored) using a workflow orchestration tool, which allows the enterprise to define the entire data pipeline. The pipeline may include harvesting data in batch or real time (streaming), running training and evaluation tasks, monitoring model performance, applying AI models in batch, and feeding the results back into the knowledge fabric or into other enterprise applications.
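Such a pipeline definition can be sketched with Python's standard-library `graphlib` (3.9+). The stage names mirror the tasks listed above, but this tiny scheduler is only a stand-in for a real orchestration tool:

```python
from graphlib import TopologicalSorter

log = []


def make_task(name):
    # Each task here just records that it ran; real tasks would move data.
    def task():
        log.append(name)
    return task


tasks = {name: make_task(name)
         for name in ["harvest", "train", "evaluate", "deploy", "monitor"]}

# The pipeline as a DAG: each task maps to the tasks it depends on.
pipeline = {
    "train": {"harvest"},
    "evaluate": {"train"},
    "deploy": {"evaluate"},
    "monitor": {"deploy"},
}

# Run every stage in dependency order.
for name in TopologicalSorter(pipeline).static_order():
    tasks[name]()

log  # ['harvest', 'train', 'evaluate', 'deploy', 'monitor']
```

Declaring the pipeline as a dependency graph, rather than a hard-coded script, is what lets an orchestration tool schedule, retry, and monitor each stage independently.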
Needless to say, security also is an important aspect of the architecture foundation. Controlling access to resources and the data pipeline is a major requirement.
AI solutions require high-quality data that is standardized and aggregated across the enterprise. The power of AI systems to work on complex problem solving on a 24-by-7 basis means that the enterprise technical architecture must deliver a continuous flow of data upon which "smart" decisions can be made. This means continuous harvesting of data in multiple formats both within and outside of an organization's traditional boundaries. Not having access to the right data creates the risk that complex AI algorithms will use outdated or incorrect data.
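One concrete guard against stale or incorrect inputs is a validation gate in front of the models. In this sketch, the field names and the 24-hour freshness threshold are arbitrary assumptions chosen for illustration:

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=24)
REQUIRED_FIELDS = {"customer_id", "amount", "timestamp"}


def validate(record, now=None):
    """Reject records that are incomplete or too stale to feed a model."""
    now = now or datetime.now(timezone.utc)
    if not REQUIRED_FIELDS <= record.keys():
        return False  # Malformed: missing required fields.
    return now - record["timestamp"] <= MAX_AGE


fresh = {"customer_id": 1, "amount": 9.5,
         "timestamp": datetime.now(timezone.utc)}
validate(fresh)                # True
validate({"customer_id": 2})   # False: incomplete record
```

Gating the pipeline this way means a model silently skips bad records instead of basing "smart" decisions on them.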
Businesses that successfully leverage the capabilities of AI must not only ensure the right focus on specific use cases that can show positive results, but also spend time defining the underlying technical architecture to enable this new generation of solutions.
Anthony DeLima is the head of Digital Transformation and Global CTO, and Sayyed Nezhadi is the chief technology architect, for NEORIS USA — a "digital accelerator" that provides tech consulting to businesses worldwide.