New Delhi: The onslaught of artificial intelligence has polarized many, but innovators, inventors and investors all agree that its impact will be felt across all facets of life. More importantly, they agree that it is critical to define both the benefits and the risks of AI.
In an essay on the philosophies underpinning AI in its current form, Vinod Khosla, co-founder of Sun Microsystems and founder of Khosla Ventures, argues that global AI innovation and investment will create a clear divide between dystopian and utopian approaches. The philosophy that innovators pursue will, in turn, determine how investments in AI are made in the coming decades.
An iconic Silicon Valley entrepreneur, Khosla was an early backer of Sam Altman's OpenAI, investing $50 million in the venture three years before ChatGPT's public launch. Khosla Ventures also participated in OpenAI's $6.6 billion funding round earlier this year, and has invested in Sarvam, an Indian generative AI startup.
Industry experts say that establishing a position in this debate will be crucial to how humanity innovates with the technology, the guardrails built around it, and the shape it eventually takes in the long term.
Elaborating on the utopian position, Khosla said: “Imagine a post-scarcity economy, where technology eliminates material constraints… and scarcity becomes obsolete – most jobs (will disappear). Yet we would still have enough abundance to pay citizens through some redistributive effort to provide them with a minimum standard of living significantly higher than the current minimum.”
But at the other end of the spectrum, Khosla argued that the risks associated with AI innovation are “real, but manageable.”
'Evil sentient AI'
“In the current debate, the doomers are focusing on the small 'evil sentient AI' risk, and not on the most obvious one: losing the AI race against nefarious nation states. This makes AI dangerous for the West. Ironically, those who fear AI and its ability to erode democracy and manipulate societies should fear this risk the most. That is why we cannot lose to China, and why we must go a step further and use AI for the benefit of all humanity,” he said.
Industry experts and advisors emphasize that this debate is necessary, and that it should determine the direction of AI innovation.
Jayanth Kolla, co-founder of technology consultancy Convergence Catalyst, said: “As with the early innovation of fire, the lack of debates around fire safety could have led to disrupted development of the technology. The same also applies to nuclear energy – and determines whether we can use nuclear energy for clean resources or for warfare. That is why it is important to define policies that indicate what is permissible innovation and what could have a negative impact on us.”
Jaspreet Bindra, founder of technology consultancy AI&Beyond, agreed, arguing that Khosla's argument is fundamental to AI's journey towards superintelligence, or, as it is more commonly known, artificial general intelligence (AGI).
“The very idea of what could make AI dystopian will provide a fundamental basis for designing our own traffic lights that regulate the flow of AI into the future. This will help us design what the ultimate idea of AGI or superintelligence should be – it is not necessarily intended to replace humans on the evolutionary journey, but rather to complement our role in employment,” said Bindra.
Kolla further underlined that the key to understanding the future of AI innovation lies in the journey of human revolutions. “We went from the industrial revolution to the information revolution. Today, our work involves harnessing information and knowledge to define work as we know it. In the future, as the evolution of AI continues, we will seek to harness the emotional intelligence of humans – with our knowledge playing a more crucial role in society. Machines, in turn, will gain far superior decision-making powers compared to what is defined by instructions today.”
AI to create jobs
This is what Khosla argues in his essay entitled 'AI: Dystopia or utopia?'. Underlining the evolution of employment, Khosla wrote that in the next five to twenty years “it is possible that AI will create new jobs that we cannot currently imagine. But in the long run, AI will eliminate most 'jobs', insofar as a job is defined as an occupation or profession that one must pursue to meet their needs and lifestyle.”
Industry leaders have also underlined the journey to AGI over the past week. OpenAI CEO Sam Altman said superintelligence is just “a few thousand days” away, while Anthropic's co-founder expects some form of AGI to be available by 2026. However, neither envisions an AI future that does not involve humans.
Explaining this further, Khosla said: “We will have to redefine what it means to be human. This new definition should not focus on the need for work or productivity, but on passions, imagination and relationships, allowing for individual interpretations of humanity.”