Generative AI: The Realm of Infinite Imagination
Excerpt from NorthStar client communication (2023)
Author: Tenzing Tashishar, Investment Research Associate
Generative artificial intelligence (AI) has been making headlines and became a major talking point during buy list company earnings calls over the last two quarters. We find it helpful to think about it in the context of general (versus generative)[1] AI, which has been in development for years and is commonly used for commercial as well as informational purposes. AI is an umbrella term for algorithms that allow machines to learn from data and perform tasks that typically require human intelligence. An example of general AI is the feature that anticipates and completes your sentence as you compose an email. Generative AI, a subset of AI, uses algorithms to create various types of content, including text, images, videos, audio, and synthetic data, in a human-like way. The technology dates to the 1960s, but it wasn’t until late last year that a chatbot by the name of ChatGPT took the internet by storm. Since its release just eight months ago, we have seen several buy list companies implement generative AI into their offerings, including Adobe, Alphabet, Microsoft, Salesforce, and Zoom. The speed at which companies have rushed to build this new technology into their offerings is unprecedented and, while the hype is exciting, the reality is that generative AI is neither a panacea nor free of serious flaws. We are striving for a balanced approach to these developments, somewhere between doomsday scenarios and technological utopianism. Although we see the transformative potential of generative AI, we have many concerns about its social impact.
We at NorthStar imagine that generative AI may increase workplace efficiency through enhanced creativity and broader idea generation, optimization via real-time data analysis, and productivity gains from expediting repetitive work. We see generative AI transforming the way we think, act, and communicate with one another. Referred to as “the steam engine of the mind,” generative AI has the potential to level the playing field now that access to knowledge and technology of this caliber has been extended to the public.
We are cautious about the role generative AI will play in numerous arenas, such as eliminating jobs, overloading cloud and electric grid infrastructure, amplifying misinformation, and enabling discrimination in militarization, war, and surveillance. For example, generative AI can automate detection by analyzing every frame from security cameras and alerting users to suspicious behavior in real time, with a disproportionate impact on Black and Brown individuals as well as those subject to oppressive regimes. Furthermore, the weaponization of generative AI could give those who do not comply with global regulation a battlefield advantage. Generative AI could assist in designing bioweapons capable of causing the next pandemic. It could also be used to produce malicious computer code for cyber-attacks. Although OpenAI assured EU officials that “instructions to AI can be adjusted in such a way that it refuses to share for example information on how to create dangerous substances,” researchers have shown that ChatGPT can be jailbroken, with certain prompts bypassing these types of safety filters.[2]
Regulators are grappling with the future of generative AI. Government officials from the US, EU, and UK have all expressed their concerns over the existential risk of AI. At a US Senate hearing, Sam Altman, CEO of OpenAI (the maker of ChatGPT), urged the development of robust safety standards for advanced AI systems and stated: “If this technology goes wrong, it can go quite wrong.”[3] Thus far, the European Union is the farthest along, with a proposed Artificial Intelligence Act awaiting concurrence between the Council, which represents the 27 EU Member States, and the European Parliament on the content and language of the proposed act. This will be the first set of AI regulations from a major regulator. Much more global regulation will be needed to mitigate, or at a minimum disclose, risks and dangers. We consider it imperative that AI ethicists and data activists are included in these conversations and hope that diverse perspectives will ensure justice, diversity, equity, and inclusion for all communities. We are monitoring regulatory developments and strategizing on opportunities to use shareholder rights to address our concerns.
___________
The forecasts, opinions, and estimates expressed in this report constitute our judgment as of the date of this letter and are subject to change without notice based on market, economic, and other conditions. The assumptions underlying these forecasts concern future events over which we have no control and may turn out to be materially different from actual experience. All data contained in this letter is from sources deemed to be reliable but cannot be guaranteed as to accuracy or completeness.
Links to third party sites are provided for your convenience and do not constitute an endorsement. These sites may not have the same privacy, security or accessibility standards.
FOR INFORMATION PURPOSES ONLY
This information may include a discussion of a number of companies and other financial market and social events. These opinions are current as of the date of this publication but are subject to change. The information provided herein does not provide information reasonably sufficient upon which to base an investment decision and should not be considered a recommendation to purchase or sell any particular security.
Footnotes:
[1] For simplicity, we refer to general artificial intelligence as AI. Generative AI is the most recent innovation.
[2] Exclusive: OpenAI Lobbied E.U. to Water Down AI Regulation | Time
[3] OpenAI’s Sam Altman Urges A.I. Regulation in Senate Hearing
CONTACT US
- 617.522.2635
- 617.522.3165
- 2 Harris Ave, Boston, MA 02130
- P.O. Box 301840, Boston MA 02130