The State of Artificial Intelligence
A Growing Monopoly
While AI has long been incorporated into many industries, 2022 marked a significant uptick in its adoption among mainstream consumers. The technology is advancing at an exponential rate, frequently surpassing its own benchmarks and automating tasks previously believed to be beyond its reach. Such rapid progress makes it challenging to keep up with the latest developments in the industry, with new breakthroughs announced seemingly daily.
Companies like Nvidia and OpenAI have become household names, while legacy players like Microsoft and Adobe are investing billions of dollars to weave AI into their core systems and remain at the forefront of the competitive landscape. AI is transforming and disrupting lives.
While the fundamental principles of AI have remained consistent over the past 30 years, the exponential increase in computational power has propelled us into an era of complex, multi-layered networks. This transformation has been augmented by cutting-edge algorithm design allowing for automatic feature extraction, with back-propagation and pooling operating across many layers. The combination of increased computing power and novel neural network design has given rise to technology with state-of-the-art performance and breakthroughs in multiple fields of computer science, including image recognition and categorisation, speech recognition and synthesis, and dozens of other use cases.
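To make the idea of multi-layered networks concrete, here is a minimal sketch in PyTorch that stacks convolutional layers with pooling and runs a single back-propagation step. The layer sizes, the random batch of images, and the ten output categories are hypothetical stand-ins chosen for illustration, not part of any system described here.

```python
# Minimal sketch: a small multi-layer network trained by back-propagation.
# All sizes and data are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, 3),   # layer 1: learn low-level features automatically
    nn.ReLU(),
    nn.MaxPool2d(2),      # pooling: downsample, keeping the strongest responses
    nn.Conv2d(8, 16, 3),  # layer 2: compose features into higher-level patterns
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 10),  # classifier over 10 hypothetical categories
)

# Stand-in batch: 32 single-channel 28x28 images with random labels.
images = torch.randn(32, 1, 28, 28)
labels = torch.randint(0, 10, (32,))

optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()    # back-propagation: gradients flow through every layer
optimiser.step()   # update the weights; repeating this loop trains the network
```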
Today’s data-driven industry leverages robust data science tools to extract meaningful insights from vast datasets, ranging from PDFs and spreadsheets to multimedia content such as images, audio, and video. This data is then harnessed by data scientists proficient in machine learning, pattern recognition, language processing, computer vision, deep learning, and much more. Their goal? To unlock new frontiers of value for companies, driving revenue growth and improving operational efficiency.
While these technological advances amplify AI’s potential to revolutionise various sectors, significant challenges remain. The primary beneficiaries of machine learning have been large, centralised entities with the means to create or purchase large training data sets. These firms can also hire from the limited pool of talent capable of producing machine learning models that benefit from training on those data sets. This exacerbates the concentration of global wealth in the hands of a few corporate entities.
In the near future, as the IoT ecosystem broadens its reach into the heterogeneous personal computing space, it will accelerate the generation of data suitable for training models. Unfortunately, the current trajectory of the ecosystem suggests a future in which data is increasingly concentrated in the hands of a few large companies.
As AI and data generation grow, an increasing number of data scientists are entering the space to tackle new problems. Most experts in the field are quickly pulled out of academia and into large corporations, with a handful of key industry players absorbing the majority of available talent.
Fortunately, many more data scientists have yet to be discovered than are already associated with major players. This presents an opportunity to foster a global data scientist community, one that collaboratively and competitively crafts machine learning models that deliver immense value.
bitgrit envisions an open and free environment for AI research, rather than an exclusionary and centralised one. More importantly, by recording attribution on an immutable public ledger, we can ensure that people who create the models that deliver value are compensated with the majority of that value.
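As a rough illustration of what ledger-based attribution could look like, the sketch below hash-chains hypothetical attribution records so that each entry commits to its predecessor and tampering with any past record invalidates everything after it. The record fields, the 0.8 value share, and the in-memory ledger list are illustrative assumptions, not bitgrit's actual ledger design.

```python
# Hypothetical sketch of hash-chained attribution records.
# Fields and structure are illustrative assumptions, not bitgrit's design.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class AttributionRecord:
    model_id: str       # identifier of the submitted model
    author: str         # the data scientist credited with the model
    value_share: float  # fraction of generated value owed to the author
    prev_hash: str      # hash of the previous record, forming the chain

    def digest(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

# Append-only chain: each record commits to the one before it, so
# altering any past entry changes every subsequent hash.
ledger = []
prev = "0" * 64  # genesis placeholder
for model_id, author in [("model-001", "alice"), ("model-002", "bob")]:
    record = AttributionRecord(model_id, author, 0.8, prev)
    ledger.append(record)
    prev = record.digest()
```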
Technical Infrastructure Demand
Currently, one of the major problems facing data scientists, especially those at smaller companies, is a lack of the infrastructure needed to train and deploy their AI models. While a data scientist may be a subject matter expert in their field, able to write algorithms that turn the data they work with into a functional AI model, developing those algorithms into a fully trained model requires technical knowledge outside their field of study.
This means that data scientists must not only become experts in the efficient use of computing power but, more concerning, must also secure access to that computing power, which is often out of reach due to limits of budget or time. Another challenge in the contemporary AI landscape is that general AI models have proven largely ineffective at solving real-world problems; data-specific models have taken precedence in their place.
With this in mind, the key to successfully bringing AI to the masses is either to adapt a pre-existing algorithm to train on a specific set of data or to create specific algorithms for specific problems.
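The first of those two paths, adapting a pre-existing model to a specific set of data, is commonly realised as fine-tuning. The sketch below freezes a pretrained backbone and retrains only a new output layer on domain-specific data; the choice of ResNet-18, the five-class problem, and the random batch are hypothetical stand-ins.

```python
# Minimal sketch: adapting a pretrained model to a specific dataset
# (fine-tuning). Model choice, class count, and data are hypothetical.
import torch
import torch.nn as nn
from torchvision import models

# Start from a backbone pretrained on a generic dataset.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor...
for param in model.parameters():
    param.requires_grad = False

# ...and replace the final layer for a hypothetical 5-class business problem.
model.fc = nn.Linear(model.fc.in_features, 5)

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Stand-in batch of domain-specific images and labels.
images = torch.randn(16, 3, 224, 224)
labels = torch.randint(0, 5, (16,))

loss = criterion(model(images), labels)
loss.backward()   # gradients reach only the new output layer
optimiser.step()
```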
Currently, platforms such as Algorithmia and Amazon’s AI Marketplace focus on letting a user apply an AI model to a specific purpose, such as gender recognition or colorising a monochrome image. However, in real-world business cases, this level of generalisation is ineffective given corporations’ specific AI needs.
Considering this, bitgrit’s focus is not on providing access to general AI models, but rather on providing the infrastructure for data scientists to train and deploy AI models for a specific business need.