March 29, 2023
Patrick A. Snell, CFA, CAIA
Chief Investment Officer
While artificial intelligence (AI) accounts for a very small portion of the world’s computing today, over the last few months, AI has been the most publicized phenomenon across the broader technology landscape. We think the latest development, generative AI, should significantly accelerate the adoption of AI across industries over the next several years and in turn have a profound impact on the broader economy’s longer-term productivity. Illustrative of the opportunity, the global AI market is expected to reach $1.6T by 2030, up from $120B in 2022 (source: Precedence Research). Last year, 50% of firms tried to use AI in some manner, up from 20% in 2017 (source: McKinsey).
Artificial intelligence (AI) is a broad branch of computer science that refers to the ability of a machine to independently perform functions that mimic human intelligence (e.g., observing, problem-solving, and analyzing). Through the use of machine learning (ML) algorithms, AI systems draw insights from information without being explicitly pre-programmed. ML algorithms are developed iteratively, uncovering unknown or hidden patterns in data and constantly refining the weighting of each input. As more data is loaded into the model, the machine gets smarter as it can infer more correlations in the data (highly iterative data processing to achieve high model accuracy). Deep learning (DL), an advanced form of ML, stacks multiple layers of processing into a neural network, resulting in faster and more accurate predictions. Neural network “training” in a data center thus produces an algorithm. That algorithm is then leveraged via “inferencing,” whereby computers respond to user inquiries/requests by applying the algorithm (i.e., the model). ML models are often trained in public (hyperscale) clouds, while inferencing increasingly takes place within edge devices (e.g., smartphones, sensors, cars) via processors that ship with trained AI models.
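For readers curious what the iterative training-then-inference cycle described above actually looks like, the sketch below is a deliberately simplified, hypothetical single-weight model (real systems refine billions of weights, but the mechanics are the same):

```python
# Minimal sketch of ML "training" vs. "inference" using a hypothetical
# one-parameter model. Training iteratively refines a weight to fit the
# data; inference then applies the trained model to new inputs.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs; true relation is y = 2x

w = 0.0    # the model's single weight, refined over iterations
lr = 0.05  # learning rate: how aggressively each error adjusts the weight

# Training: repeatedly adjust the weight to shrink prediction error.
for _ in range(200):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x  # gradient of squared error w.r.t. w
        w -= lr * grad             # refine the weighting

# Inference: respond to a new input by applying the learned model.
print(round(w, 2))        # learned weight, converges to ~2.0
print(round(w * 5.0, 1))  # prediction for a new input, x = 5
```

The more (input, target) pairs the loop sees, the more tightly the weight converges, which is the sense in which more data makes the model "smarter."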
Over the last several years, explosive growth in data (structured and unstructured data; videos, photos, website traffic, social media, text, mobile devices, geospatial, IoT/sensors, cloud apps, industry data) and increasingly robust computational resources have enabled a rapidly growing trove of sophisticated algorithms to power advanced applications. Those applications have included image recognition (e.g. diagnose medical scans), speech translation, personalized recommendations, data mining, sentiment analysis, pattern matching, and voice-activated virtual assistants (with natural language user interfaces).
Trained on large neural networks, more recently developed foundation AI models perform a variety of tasks rather than a single specialized one. Each model can be modified to automate a wide variety of discrete tasks, making AI projects easier and cheaper for companies. Large language models (LLMs), a type of foundation model, learn the complexity and linkages of language. LLMs utilize deep learning in natural language processing (NLP) and natural language generation (NLG) to accurately predict the next word in texts. Some of the leading foundation models/LLMs have been developed at large tech labs (e.g., OpenAI, DeepMind, Google Brain, Meta Research).
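The next-word-prediction task at the heart of an LLM can be illustrated with a toy model. The sketch below uses simple bigram counts over a made-up corpus rather than a deep neural network, but the objective is the same one LLMs are trained on: given the words so far, predict the most likely next word.

```python
# Toy illustration of next-word prediction (the core LLM training task),
# using bigram frequency counts instead of a neural network.
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the model learns the next word".split()

# Count which word follows which across the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("next"))  # "word" always follows "next" in this corpus
```

An actual LLM replaces the frequency table with a neural network conditioned on the entire preceding context, but conceptually it is still assigning probabilities to candidate next words.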
Emergence of Generative AI:
Built from foundation models and enabled via improved natural language processing (NLP) models, generative AI algorithms produce original content (i.e., translated/summarized text, images, music, speech, software code, video, 3D graphics, etc.). A key enabler for generative AI has been transformers, a type of neural network architecture that makes it possible to train very large models in reasonable amounts of time. Transformers do this by learning context and meaning from the relationships and dependencies within data, processed in parallel and at large scale.
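The core transformer operation, scaled dot-product attention, can be sketched in a few lines of NumPy. This is a simplified illustration (the token embeddings are hypothetical random vectors, and real transformers add multiple heads, learned projections, and many stacked layers); the point is that every position attends to every other position in one parallel matrix computation, which is what makes training very large models tractable.

```python
# Minimal sketch of scaled dot-product attention, the core transformer
# operation: softmax(Q K^T / sqrt(d)) V, computed for all positions at once.
import numpy as np

def softmax(x):
    # Numerically stable row-wise softmax.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # pairwise relevance of every token pair
    weights = softmax(scores)      # each row is a probability distribution
    return weights @ V             # context-weighted mix of value vectors

# Hypothetical 3-token sequence with 4-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
out = attention(X, X, X)  # self-attention: queries, keys, values all from X
print(out.shape)          # (3, 4): one context-enriched vector per token
```

Because the pairwise scores are a single matrix product rather than a sequential scan, the whole computation parallelizes across modern accelerators.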
In late 2022, Microsoft-backed development lab OpenAI released its AI-powered chatbot, ChatGPT. Powered by the large language model GPT-3.5, ChatGPT produced predictive responses to human prompts, and the reaction from early users and the media was overwhelming. Earlier this month, the fine-tuned GPT-4 went live, offering impressive advances in predictability/accuracy (it excels at tasks requiring advanced reasoning and the understanding of complex instructions), multi-modality (the ability to learn from and respond to both text and images), mathematical problem solving, and data analysis and visualization. The updated model performed well on standardized tests (SAT, GRE, Bar Exam). To extend the bot's functionality, OpenAI has introduced plugins that connect ChatGPT to outside knowledge sources and databases via application programming interfaces.
Following ChatGPT’s record-breaking user uptake, other major technology players have responded by introducing competing AI chatbots (Google’s Bard, Baidu’s ERNIE). Microsoft has introduced new versions of its Bing search engine and Edge web browser which include access to a chatbot powered by OpenAI’s GPT-4 language technology. In addition, Microsoft’s new 365 Copilot, built on an LLM, is a natural language interface that will soon run across the company’s suite of productivity applications (Word, Excel, PowerPoint, Outlook, Teams, Viva, Power Platform).
While shortcomings exist (factual errors, bias, intellectual property issues), the vast utility possibilities and enhanced productivity implications make generative AI an enduring focal point for tech-oriented venture capitalists and forward-thinking enterprises across all industries. The number of applications leveraging/integrating generative AI is set to explode. Not only are emerging companies introducing applications offering automation and co-creation (e.g., Jasper generates marketing material), but major corporations across the globe also have a renewed sense of urgency to develop AI strategies in order to automate processes as well as reimagine products, services and related business models. Recognizing the heightened interest by their corporate customers, enterprise software companies are keen on monetizing this demand trend by rapidly embedding generative AI functionality within their applications.
Lastly, AI algorithms customized with domain-specific content (e.g., particle physics, flow dynamics, robotics) are expected to proliferate. For example, in the area of drug discovery, AI tools released by DeepMind and Meta Platforms predict protein structures which can help scientists understand biological function. In turn, biopharma companies could leverage AI to boost the effectiveness of existing drugs and drug candidates as well as help discover molecules that could treat diseases.
In an effort to capitalize on the growth of AI and generative AI in particular, we are focused on identifying attractive investment opportunities in the following areas:
• Enabling technology for AI includes machine learning code, large data lakes (storage repositories that hold a vast amount of raw data), low latency/high reliability network architecture and semiconductors for computational acceleration.
High-speed networking and transmission semiconductors form the backbone of the data-center infrastructure on which neural networks run.
GPUs (graphics processing units) dominate AI model training, outperforming in scalability, computing performance and power efficiency (via parallel computing). Until recently, CPUs (central processing units) powered the majority of inference workloads. However, CPUs are increasingly being replaced by GPUs, as CPUs struggle to adequately process video workloads and support deep learning models while consuming too much power.
High bandwidth memory (HBM) chips are increasingly used rather than traditional DRAM.
• Hyperscalers: AI tech platforms can be deployed within, or delivered by, computing platforms managed by the hyperscale cloud players, or by enterprises executing high-performance computing within their own (or leased) datacenters.
Not only are hyperscale web leaders the most aggressive users of machine learning internally, but they are also focused on making machine learning/AI functionality more accessible to enterprises as a service. Hyperscale players (e.g., Google Cloud, Amazon Web Services, Microsoft Azure, Oracle AI, IBM Watson) have developed and released their own AI platforms upon which enterprise customers can build applications. In those settings, developers use predefined models and development tools delivered as a service.
• Enterprise software vendors are those companies that successfully embed generative AI functionality within their software suites.
Mastrapasqua Asset Management, Inc. does business as M Capital Advisors. If you have a question or need further information, please contact:
Edwin Barton, Principal, Chief Portfolio Strategist in Nashville at 615-255-9898, email@example.com
Claude Koontz, CFA, Principal & Portfolio Manager in San Antonio at 210-353-0519, firstname.lastname@example.org
© 2023 Mastrapasqua Asset Management, Inc. All rights reserved.
The information and opinions contained in this report should not be treated as fact or as insight that will produce desired investment results over time. Investment conclusions always bear risk, and that risk may not be reasonable for any particular reader. Obviously the writer, even assuming good intentions, does not know of the reader’s particular financial circumstance and therefore is not able to assess the propriety of whether a named security makes sense as part of a given individual, family, or institutional portfolio. Mastrapasqua Asset Management clients may, from time to time, own some of the companies mentioned. We hold out no duty to give readers of this column advanced notification of when we may change an opinion. Investors should receive investment advice based on an assessment of their own particular investment circumstances and not on the basis of recommendations in this report. Past performance is not indicative of future returns.