By Mark Munger

AI is making headlines with breakthroughs that are met with awe, skepticism, or indifference; depending on the author’s viewpoint, the new advancements are transcendent, existential, or in some cases just ho-hum. These advancements are revolutionizing capabilities, propelling both personal ambitions and corporate objectives. For forward-thinking business leaders, AI integration is non-negotiable. Consumers are reaching new productivity peaks as government policymakers contemplate regulations. The trajectory of AI is steep and inclusive, promising benefits across the spectrum; it will continue to advance, and it has something for everyone.

The advent of new AI technologies has not only ushered in innovation but also triggered a careful examination of their usage, the data they produce, and their broader economic implications. AI is a market in transition, and it is changing how we work, learn, and live.

This article will delve into the regulatory landscape, probe the reasons behind its timing, uncover what is genuinely new, and address the question: what’s in it for me (WIIFM)? I will also recount my personal foray into AI development using openly available free resources, underscoring the accessibility of AI creation today.

Regulation

Any examination of AI regulation should start with the recent governmental directives poised to shape AI’s trajectory. The U.S. government’s Executive Order 14110 aims to ensure the secure and ethical development of AI, aligning with similar international efforts such as the UK’s AISI. This legal framework could either catalyze or curtail AI’s evolution. Within these regulatory confines, AI’s potential risks, from misinformation to job displacement, are scrutinized. While hyperbolic fears liken AI to dystopian narratives, pragmatic risk assessment is key: we must weigh immediate societal gains against unlikely catastrophic scenarios, fostering innovation while mitigating risks.

Government attention has increasingly turned toward AI due to its potential to disrupt society, a concern that, while not new, has gained prominence with recent technological advances. AI is already an established technology, integral to daily tasks like spell-checking. Its newfound capabilities and accessibility in natural language processing (it speaks English) are now being used by millions, propelling both its benefits and its inherent risks into the spotlight. It is crucial that we not only harness those benefits but also diligently assess and address the accompanying risks to maintain a necessary balance.

The risks of AI range from wrong answers accepted as truth, to biased answers that steer public opinion, to productivity gains that eliminate existing jobs, and, most feared of all, AI becoming a self-aware existential threat to humanity.

The latter, with its comparisons to the sci-fi Skynet of the Terminator films, makes me pause. The genie escaping from the bottle and wreaking havoc upon mankind is but one possible extreme outcome in the far right tail of the bell curve. Like all risks, it should be weighed for probability and impact and managed accordingly. Many other, more immediate benefits to society should be considered as well, and those immediate benefits likely outweigh the remote probability of extermination.

In many risk statements, “AI” could be swapped for other new technologies such as “the Internet” or “the mobile phone.” Even the automobile, with its technology, introduced risks. All new technologies that facilitate or interact directly with people carry risk; without risk, there are no great advancements.

The Executive Order and AI Innovation

AI is the new hot button for a government that is already leery of big tech. The White House’s 111-page EO states its objective: “to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI).”

In that statement, leading the way and seizing the promise carry equal weight with managing the risks. Yet the feel of the text, and of the order itself, is driven more by risks and restrictions. The departmental statements and responses have been better; it seems there are parts of government that understand AI, its uses, and its risks. That understanding needs to filter up.

The fact-sheet summary of the EO illustrates part of the political problem. It is pieced together from adjectives, or “loaded words,” such that it reads like it was written by a political operative rather than by someone knowledgeable in AI or technology. The full EO text is better, though it lacks some of the technical background and objective analysis that should be its focus.

Government Regulation of AI

The ability of governments to regulate AI, a new and still-maturing technology, is more subjective than objective. AI is not a business or a product; it is ideas, technology, and a tool that can be created by anyone, as demonstrated by the rise of open-source models. The development of AI can no more be regulated than math equations can, nor should it be. The release and application of AI to the public is where it can be objectively measured, though I know we will continue to have lively public debates about those measures.

The government should provide frameworks for use, oversight without disruption, and monitoring, focusing on businesses that provide products to the mass consumer. The U.S. government should provide guidance and frameworks that align with its purposes of ensuring justice, promoting the general welfare, and providing for the national defense. And while there can be much debate about the role of government, there is little debate that government innovation is the exception rather than the rule. Regulation should focus not on innovation itself but on the responsible use of AI where the technology is provided to the consumer.

The government has tools to regulate business; however, it has few productive tools to limit use and development by students, enthusiasts, and professionals on their own home computers. The government should let innovation flourish within industry and in the public domain, providing frameworks for the inputs and the process while regulating the outputs, where objective measures can be implemented.

Reactionary

Are the new reactionary actions by the government necessary?

AI is not new; it has been around for decades. Yet with generative AI and natural language advancements, a fear has emerged that it is an existential threat out of a Terminator or Matrix movie. One commentator recently called this “fear porn.” The term may not be well defined, though it certainly conjures an image that fits the talk coming from the many Chicken Littles in government and media.

There are real AI issues that need government regulation or intervention, such as the directing of public opinion we have already seen with social media, and the displacement of workers. These are objective outcomes that are known and can be foreseen, and they should be addressed while technology development continues.

“Democratize the automobile”

There is a precedent for letting industries and technology mature. One such industry was the automobile.

Henry Ford set out in the early 1900s to democratize the automobile, making one available to anyone of modest means. The automobile existed before Ford, but much of what we know today about both automotive and general manufacturing was altered by Ford’s drive to put every person behind the wheel. The world was changed, overall for the better, by the industrial process. One is left to ponder: if today’s US government had been in session back then, would it have allowed the mass production of automobiles, given the injuries to workers in the plants, the environmental impact of the automobile, and the risk to the safety of drivers, passengers, and bystanders?

Democratize AI

Artificial intelligence has been the subject of human imagination for centuries. It has been part of computer technology since the 1950s, when the Turing Test was proposed as a measure of AI progress. It has been used in computer games, medical diagnosis, financial trading, and military applications for years.

Much of the recent progress can be attributed to the advancement of transformer technology. This is not some fictional race of extraterrestrial robots but a way to structure the neural networks that AI is built on, allowing exponential advancement in the training of AIs. Transformers let AIs learn relationships across large inputs with a new encoding method; think paragraphs and pages rather than a single letter or word. The method also made parallel processing possible, which enabled the use of modern GPU technology.

The 2017 research paper “Attention Is All You Need” by Vaswani et al. is what propelled the development and acceleration of transformers. Transformer is the “T” in products like ChatGPT, whose full name is Chat Generative Pre-trained Transformer. Transformers exponentially expand the depth and breadth of information that AIs can be trained on; this is part of what is called deep learning.
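To make the attention idea concrete, here is a minimal sketch of the scaled dot-product attention at the core of the transformer from that paper. It assumes PyTorch is installed; the tensor names and sizes are illustrative choices, not taken from any particular model.

```python
# A minimal sketch of scaled dot-product attention (Vaswani et al., 2017).
# Assumes PyTorch; sizes are toy values chosen for illustration.
import torch
import torch.nn.functional as F

def attention(q, k, v):
    # q, k, v: (batch, sequence_length, d). Every token attends to every
    # other token, which is how relationships across whole passages are
    # learned, and why the computation parallelizes so well on GPUs.
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5   # similarity of each token pair
    weights = F.softmax(scores, dim=-1)           # normalize into attention weights
    return weights @ v                            # weighted mix of value vectors

# Toy usage: a "sentence" of 6 tokens, each a 16-dimensional vector.
x = torch.randn(1, 6, 16)
out = attention(x, x, x)   # self-attention: the sequence attends to itself
print(out.shape)           # torch.Size([1, 6, 16])
```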

The generative process has also advanced, making results more adaptive. Generative AI uses algorithms that learn from existing data and generate new data based on that learning, whereas traditional AI focuses on pattern recognition and decision-making. This enables generative AI to produce results that are new, creative, and unique, while traditional AI is usually more rigid and rule-based.

Unsupervised learning has improved with new technology as well. One of the more interesting developments is the Generative Adversarial Network (GAN). A GAN consists of two neural networks trained simultaneously: a generator and a discriminator. The discriminator is the critic, reviewing the generator’s output and pushing it toward producing authentic-looking data. This both improves and speeds up the training process.
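Here is a minimal sketch of that adversarial loop, again assuming PyTorch. The toy task (matching a one-dimensional Gaussian) and all the network sizes are my own illustrative choices, not drawn from any production system.

```python
# A minimal GAN sketch: generator and discriminator trained in opposition.
# The toy target distribution N(3, 0.5) and all sizes are illustrative.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator (the critic): learn to label real as 1, fake as 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator: learn to fool the critic into labeling fakes as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(5, 8)).detach().squeeze())  # samples should drift toward ~3.0
```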

Let’s Build One!

Is AI truly an idea and a technology that anyone can create and use? I needed to verify this part of my thesis.

Over Thanksgiving break, I decided to work through several papers and videos I had gathered on building and training a language model. For someone who doesn’t write code daily and doesn’t have the fastest equipment or GPUs, I would rate the experiment a success.

Using publicly available, open-source software, I created a very basic model, trained at the character level on only English letters and numbers. By the end of the weekend, it produced English-like responses that made sense to me, though to most they would have looked like a child’s gibberish. That is exactly what it was: early learning with an incomplete input set. The technologies were basic, but the exercise gave me a much better understanding of the data needed for training, the training process itself, and what the output initially looks like, along with a great respect for the engineers at OpenAI and other AI companies.
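For a flavor of what that weekend involved, here is a minimal sketch of a character-level language model, in this case a simple bigram model. This is not my exact code; the stand-in corpus, sizes, and hyperparameters are illustrative, and it assumes PyTorch.

```python
# A minimal character-level bigram model: build a character vocabulary,
# train next-character prediction, then sample. Early samples read like
# gibberish, exactly as described above. Corpus and sizes are stand-ins.
import torch
import torch.nn as nn

text = "the quick brown fox jumps over the lazy dog " * 50   # stand-in corpus
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in text])

# Each character's embedding row is read directly as logits for the next character.
model = nn.Embedding(len(chars), len(chars))
opt = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    loss = loss_fn(model(data[:-1]), data[1:])   # predict each next character
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample 80 characters from the trained model.
idx = torch.tensor([stoi["t"]])
out = "t"
for _ in range(80):
    probs = model(idx).softmax(dim=-1)
    idx = torch.multinomial(probs, 1).squeeze(1)
    out += itos[idx.item()]
print(out)
```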

This proved that no US government regulation would have prevented me from developing it. In an open and free society, it should be assumed that work like this is mostly being done for good. It should be monitored, and where it scales to public availability, regulated; but it should be left to develop and to continue the innovations we have recently seen.

New Product – New Attention

The emergence of widely accessible AI products like ChatGPT, Claude, Pi, Google Bard, and Microsoft Bing has catalyzed government scrutiny. These tools, pre-trained on vast datasets assembled by their developers, have sparked debates over copyright, bias, and the integrity of their training materials. It’s not just ‘garbage in, garbage out,’ but the subtler, more insidious ‘bias in, bias out’ that demands vigilance. Despite these concerns, the economic boost from AI cannot be ignored. Adherence to existing laws on privacy and intellectual property, coupled with careful monitoring, may offer a balanced approach to nurturing AI’s growth while safeguarding public interests.

Frameworks

The directive of the UK AISI is “to keep people safe in the face of fast and unpredictable progress in AI.” AI is a tool; much like the automobile, it can accelerate progress and cause harm. It will increase the productivity of the masses while eliminating the need for the output of some of those masses. The UK AISI is not a regulatory body; it is more like the US National Institute of Standards and Technology (NIST). NIST has the respect of the technical community, offering frameworks for cybersecurity and many of the standards in use today. This approach should be considered, as it preserves the fast pace while providing guidelines, ones that can be developed more objectively and then used in regulation and enforcement.

AI is growing rapidly, which is good for society; if its progress were predictable, it would already have happened. Letting it remain fast and unpredictable is a good plan, provided government reviews the products developed and gives the public the information and oversight needed to keep it safe.

Summary

In the evolving landscape of AI, markets in transition signify both opportunity and upheaval. While government oversight is necessary to navigate these changes, overbearing control akin to a “Bureau of Economic Planning and National Resources” is not. Oversight should facilitate safety and ethics at the intersection of AI and the public, not hinder innovation with unwarranted bureaucracy. The balance we seek is one of structured guidance that nurtures progress, ensuring AI’s benefits are realized responsibly and equitably across society.