Artificial general intelligence (AGI) is a hypothetical form of AI whose intelligence matches or exceeds that of a human being across a broad range of tasks. While some believe that AGI is still many years away, Bob Muglia expects to see it with us by 2030. Here, he expands on the theme.

It’s a pleasure to speak with you, Mr Muglia! Can you briefly explain the concept of the “Arc of Data Innovation” as introduced in your book The Datapreneurs, and its relevance to the advancements in artificial general intelligence (AGI)?

I trace the Arc of Data Innovation back to the mid-twentieth century. Computer scientists developed the earliest digital computing techniques in the 1940s, using raw computational power to break encrypted military communications, but in the 1950s they began to harness computing capabilities to manage and analyse all sorts of data. Since then, computing advances have proceeded on parallel tracks, with computation on one track and data on the other. They are the yin and yang of information technology.

In The Datapreneurs, I use a simple graphic to represent the Arc of Data Innovation. It’s a swoopy line charting innovation breakthroughs in time sequence. The arc begins on the lower left with the emergence of structured data and the relational database, proceeds through the internet revolution and the rise of cloud computing, and then soars upward into an era where computer intelligence will exceed human intelligence in many ways. The reader can see clearly how innovation has accelerated in recent years.

I see artificial general intelligence (AGI) arriving by 2030. That’s when computers will be smarter than the average human.

I don’t want to create the false impression that major advances in data technology come at a steady pace, though. Often innovations come in a herky-jerky style. For instance, computer scientists in universities established the field of artificial intelligence in the 1950s, but mainstream innovation in AI didn’t really take off until the 2000s, when futurist Ray Kurzweil popularised the concept of a technological singularity – a time when machine intelligence would surpass human intelligence – and, in 2011, IBM’s Watson defeated human champions on the TV show Jeopardy! Just now, we’re seeing the explosion of large language models, such as OpenAI’s GPT-4. As a result, we’re at an inflection point that has the potential to rapidly transform business and society.

Looking forward, I see artificial general intelligence (AGI) arriving by 2030. That’s when computers will be smarter than the average human. I think the 2030s will be the beginning of the robotics era, when a wide variety of autonomous robots will surround us. Superintelligence, or computing systems that are more capable than all humans combined, could arrive by the 2040s. A technological singularity – a point at which technological progress accelerates beyond our ability to predict it – may come a little later, perhaps by 2050. Around then, it seems likely, we will entrust computing systems to run many of the data analytics and decision-making processes of business and society with little human supervision – but, hopefully, with plenty of human oversight. Of course, the pace of innovation is difficult to predict precisely.

Many experts have varying opinions on the timeline for achieving AGI. You mentioned that we might see the emergence of AGI within the next decade or sooner. What factors contribute to your belief in this timeline?

The main thing is focus. A lot of human brain power is at work right now advancing AI technology and figuring out how to use it to improve business, the economy, and society. Computer scientists in universities and their smartest graduate students are pursuing breakthroughs in AI. Entrepreneurs in Silicon Valley and elsewhere are rapidly launching AI companies. It’s the internet gold rush all over again. Meanwhile, within companies large and small, business and technology leaders realise that harnessing AI is likely to be crucial in achieving new levels of efficiency, developing new successful products and services, and improving market awareness and outreach. When you have so many people and so much money focusing on one thing, you can progress very quickly.

Apart from AGI, what other major advancements or breakthroughs do you foresee occurring within the Arc of Data Innovation? What impact might they have?

These rapid advances in artificial intelligence aren’t happening in a vacuum. One of the reasons we’re seeing such tremendous progress is that scientists are building most of the new AI technologies on top of the foundation of cloud computing. The cloud makes it easy to manage and analyse large amounts of data, to combine different data types, and to share data both internally and with business partners. We call the cloud technology ecosystem the modern data stack. A handful of companies provide critical components of the stack, including Snowflake, Databricks, and Fivetran. Others, including Microsoft, Google, and Amazon, offer customers the whole thing – from public cloud infrastructure on up. I expect to see significant advances in modern data stack technology, including major improvements in data governance. This is important for progress in AI because, unless we can control what data is available, how it can be used, and who can access it, we won’t be able to take full advantage of all of the new AI capabilities.

Another critical area is knowledge graphs. A lot of people are familiar with how tech companies such as Google use knowledge graphs to present relevant information to computer users. But there’s something else going on, as well. RelationalAI, a company I have invested in, is creating a new kind of application development platform designed specifically for building data-centric applications. The core technology component here is a relational knowledge graph, which allows organisations to fully model business processes within a database. This approach enables them to build applications faster and to make their models more accurate. In some cases, that helps automate business processes; in others, it helps people make better decisions.
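The modelling idea behind a relational knowledge graph can be sketched in a few lines: facts about a business process live in base relations, and rules derive new relations from them. The sketch below is a hypothetical illustration in plain Python – the relation names and data are invented, and RelationalAI’s actual platform uses its own relational language, not this API:

```python
# Illustrative sketch of relational modelling (invented data and names,
# not RelationalAI's language): base relations hold facts, and a rule
# derives a new relation from them.

# Base relations for a hypothetical order-fulfilment process.
orders = {("o1", "alice"), ("o2", "bob")}   # (order_id, customer)
shipments = {("o1", "2024-01-05")}          # (order_id, ship_date)

# Rule: an order is "pending" if it exists but has no shipment yet.
shipped_ids = {order_id for (order_id, _) in shipments}
pending = {(order_id, customer)
           for (order_id, customer) in orders
           if order_id not in shipped_ids}

print(sorted(pending))  # [('o2', 'bob')]
```

Because the whole process is modelled as relations, the "pending order" logic lives in the data layer itself rather than being scattered across application code, which is the accuracy and speed advantage the paragraph above describes.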

When you combine the modern data stack with relational knowledge graphs and AI, you get a new computing paradigm custom-made for handling twenty-first-century challenges and opportunities.

When you have so many people and so much money focusing on one thing, you can progress very quickly.

The other thing I want to mention is quantum computing. I made the point earlier about computation and data being the yin and yang of information technology. Microprocessors have been a key element of progress in computing. For decades, we have been able to pack ever more transistors on a chip, rapidly increasing computing power. But, in recent years, because of the laws of physics, progress has slowed. Along comes quantum computing, built on entirely different physics, which promises to provide computational power beyond our wildest dreams. The combination of quantum computing with AI and the other data technologies will continue to increase the pace of progress, potentially resulting in a technological singularity, where progress proceeds at an unimaginable pace.

How has the work of Isaac Asimov inspired and influenced your perspectives on AI?

I have loved Isaac Asimov’s writings since I was a teenager. I devoured his science fiction novels and stories like popcorn. Even in the early days of digital computing, he had a clear-eyed vision of the future, and many of the changes he envisioned have come to pass. I consider him to be a prophet. Today, with these advances we are seeing in AI, Asimov is even more relevant than before.

Very early on, Asimov saw that machine intelligence would become not just a boon for humankind but would also pose moral and ethical challenges. His focus was on robots, but the ideas he wrote about apply to all AI systems. He believed that humanity would need to create laws to control intelligent machines, and he proposed a set of them, which he called the Laws of Robotics.

I think it’s important to spell them out:

  • Zeroth Law: A robot may not harm humanity or, by inaction, allow humanity to come to harm.
  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

I view Asimov’s Laws as a solid foundation upon which society can build sensible policies governing AI.

In your opinion, what steps can corporations take to ensure that they meet their ethical and moral responsibilities when incorporating AI into their operations? How can governments play a role in regulating AI practices?

Since ChatGPT and other applications built on large language models emerged late last year, tech companies have been assessing how best to develop and deploy these technologies. Many large corporations are doing the same kind of assessment. All of these activities fall under the rubric of “Responsible AI”. Take a look at the websites of Microsoft, Google, and OpenAI, and you can read their policies on what they will and won’t do with AI. I believe that every organisation should think deeply about these issues, establish clear principles and, when it’s appropriate, establish rules. I think the same is true of governments.

That said, I don’t believe governments should rush into regulating AI. We don’t want to hamstring innovation. And we don’t want to politicise technology. I agree with other tech leaders who urge governments to establish AI-monitoring agencies whose job is to understand the technologies, observe how they are used, spot areas of concern, and propose safeguards.

I know that the European Union and China have already begun regulating AI. The United States has been more cautious. Just a few weeks ago, at the urging of the Biden administration, seven large tech companies announced an agreement to adopt voluntary safeguards on how they would develop AI technologies. The guidelines include testing to prevent security risks, sharing best practices, transparency about how their systems work, mechanisms to gain public trust, and a commitment to use the technologies to take on some of society’s greatest challenges, including cancer and climate change. This is all good.

The Biden administration has signalled that it will issue executive orders regarding AI and will work with Congress to guide legislation. As I said before, I hope they don’t rush into this. The administration has also pledged to work with other governments worldwide to agree on basic principles governing the use of AI. I applaud that idea. We need something like a multi-national nuclear non-proliferation treaty to head off dangerous uses of AI by governments, particularly uses that endanger their citizens or that could result in war.

Which industries do you predict will experience the most significant impact from the continuous advancement of AI technology?

As with the internet, AI will eventually touch every aspect of business, the economy, and human activity. I think two of the domains that could benefit most from AI are healthcare and environmental sustainability – dealing with climate change. These are areas of almost unfathomable complexity. We need lots of help from smart machines. AI can help us understand much better how the human body and our natural environment work so we can do a better job of improving health and safeguarding the planet.

Generative AI programs promise to improve diagnoses of diseases by providing physicians with evidence and options gathered in real time from deep wells of information.

AI has the potential to transform how healthcare is delivered. A number of countries in the developed world have large and, in the main, effective healthcare systems, but the ageing of their populations means that the cost of healthcare, already high, is bound to soar. New AI technologies have the potential to make healthcare systems more efficient and to improve the quality of care. For instance, applications built on top of large language models are already being used to help physicians automatically convert their conversations with patients into formal notes – and to produce jargon-free summaries for patients after their visits. Generative AI programs promise to improve diagnoses of diseases by providing physicians with evidence and options gathered in real time from deep wells of information.

Scientists and industry leaders say that many of the technologies we need to address climate change already exist or are on the drawing board. But getting new technologies deployed quickly where they are needed is a challenge. Too often, people in places facing particular problems lack the knowledge and technologies that could help them devise solutions. This is a job for AI-powered research and collaboration platforms.

Already, a number of such platforms exist or are under development, aimed at helping people with problems by connecting them with the people, companies, technologies, and financial resources that could provide the solutions they seek. One example is Sustain Chain, an initiative launched a couple of years ago in cooperation with the United Nations. It’s essentially a match-making site for people and organisations that aim to solve supply-chain sustainability problems. AI, and large language models in particular, will be the key to turning the potential of these platforms into practical realities.

I urge tech entrepreneurs to explore opportunities in domains where they have expertise. For the first time, with AI, we can build intelligence into applications. This makes it possible for domain experts to encapsulate, or “bottle”, their industry knowledge into products. That is an incredible advancement, and it will enable thousands of enterprising entrepreneurs to build large and successful companies that have a meaningful, positive impact on humanity and the planet.

What advice would you like to impart to aspiring “datapreneurs” who aim to make an impact in the data-driven industry?

One of the core themes of The Datapreneurs is the importance of values for organisations, large and small. In my years at Microsoft and, since then, at Snowflake and through my involvement in startups, I have seen again and again how important a strong, clear, and well-understood set of values is to the long-term success of a business. At the same time, I believe that we should never separate technology from values and ethics. As I write in the book, creating and selling products is the “what” of doing business; values are the “how”.

Values are even more important as the Arc of Data Innovation gives smart machines ever more critical roles in business and society. Machines will be our partners in shaping the future. They must share our values at a fundamental level. For that reason, we must literally program our values into AI-based applications and models. Embedding values in software will provide guard rails that can help ensure that our smart machines work with us and for us, and not against us.

Executive Profile

Bob Muglia is a data technology investor and business executive, former CEO of Snowflake and past president of Microsoft’s Server and Tools Division. He serves as a board director for emerging companies that seek to maximise the power of data to help solve some of the world’s most challenging problems.
