AI is changing the world as we know it, and at an unprecedented speed. Now that Large Language Models (LLMs) such as OpenAI’s GPT are available to the general public, the impact AI can have on our lives is becoming more and more visible. With over 1 billion visitors to its website in the first 4 months, it’s safe to say that ChatGPT is already affecting the lives of many. But what kind of impact is it having?
After the first few months of the world being amazed by what these tools can do, and with many users incorporating them into their daily (professional) lives, the positive and negative implications of this technology are becoming clearer, prompting more critical voices to discuss our future with AI.
At memri we’re extremely excited about the potential of AI, but we’re also concerned about how things are done right now, especially when it comes to privacy, data ownership and transparency. We envision a future in which AI tools truly benefit the world and the people on it. That’s why we’re advocating for Regenerative AI.
Let’s start at the beginning. Artificial intelligence is essentially a lot of math: software that recognizes patterns in information. That information can be anything: language, pictures, audio, browsing behavior, and so on. Once the software identifies these patterns, it can be used to program actions. Some are relatively simple, such as autocomplete features: when someone types “thank”, it suggests that the next word will be “you”. Some are more complex, like Tesla’s Autopilot driving your car around. In both cases, the action is triggered by software that recognizes patterns and predicts the most logical next step.
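To make the autocomplete example concrete, here is a minimal sketch of a toy “next word” predictor in Python. It simply counts which word most often follows another in a tiny made-up corpus; the corpus and the function name are invented for illustration, but real language models rest on this same idea of predicting the most likely next step.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model learns from billions of sentences.
corpus = [
    "thank you for your help",
    "thank you very much",
    "thank goodness it worked",
]

# Count which word follows which (a so-called bigram model).
next_words = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_words[current][following] += 1

def autocomplete(word):
    """Suggest the word most frequently seen after `word`."""
    candidates = next_words.get(word)
    if not candidates:
        return None  # never seen this word: no suggestion
    return candidates.most_common(1)[0][0]

print(autocomplete("thank"))  # -> you ("you" was seen twice, "goodness" once)
```

The pattern here is explicit counting; large models learn far subtler statistics, but the prediction step is conceptually the same.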
Now that generative models such as ChatGPT are surfacing, it is becoming clear that these models are easy to interact with and can perform complex tasks. Sometimes their answers are perfect, most of the time they’re pretty good, but occasionally they’re terrible and completely false. The reason is that these models are trained on data: huge amounts of it, containing all sorts of information. Using incredible amounts of computational power, the model trains itself to become better at predicting the right outcome based on the data available, encoding what it learns in variables known as parameters.
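As a rough illustration of what “training parameters” means, the sketch below fits a single parameter to a handful of invented data points by repeatedly nudging it to reduce prediction error. This is gradient descent in miniature; all numbers are made up for the example, and real models do the same thing with billions of parameters.

```python
# Toy "training": one parameter, nudged repeatedly until its
# predictions match the data.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x, targets y = 2x

w = 0.0             # the model's single parameter, starting from a bad guess
learning_rate = 0.01

for step in range(1000):
    for x, y in data:
        prediction = w * x
        error = prediction - y
        # Nudge the parameter in the direction that shrinks the error
        # (gradient descent on the squared error).
        w -= learning_rate * 2 * error * x

print(round(w, 3))  # -> 2.0: the parameter has "learned" the pattern
```

The guessing behavior described earlier comes from the same mechanism: the model always produces its best statistical prediction, whether or not the data gave it anything reliable to predict from.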
In a way it’s a giant guessing machine, and when it needs to answer a question about something it has never heard of, it does what we’ve all done as kids: it makes up an answer. This is called hallucination. The problem is, you don’t know when it’s doing that. That’s not so bad if you ask it to predict the score of an upcoming soccer match; it’s potentially very bad if you ask whether a certain plant can be consumed safely.
In that sense, AI models are just like any other tool: wielded correctly they can be extremely helpful; wielded incorrectly they are useless or potentially even dangerous.
These AI models can feel like magic, but they are actually the outcome of hard work, mostly done by computers. Picking the data for these models to train on is a complicated task and will remain a challenge for years to come. How do you make sure the data you train the model on is not biased? We’ve already seen many, many, (many!) cases of AI implementations that didn’t work properly due to a lack of representative data.
Many of the organizations creating such models are, despite what their names suggest, not transparent. It’s unclear where their data comes from, and they provide no insight into how their models were trained, making it hard or even impossible to find the biases those models may have. It makes it impossible for anyone to alter the models and improve them, and impossible to reward the creators of the data used to build them. This lack of transparency, and more specifically the lack of reasoning about why certain datasets were used and others weren’t, should raise questions among users.
What also lacks transparency is what happens with the prompts users enter. ChatGPT is already being used by many people for all sorts of tasks, and companies are struggling to understand what happens with the information given as input. Samsung was one of the first to introduce a company policy restricting the use of generative AI on any company device, and it is working on alternative solutions after learning that employees had been sending sensitive internal data to ChatGPT’s servers.
Ask your friends and colleagues and you’ll find many more examples of people entering data, code, or prompts that contain sensitive information: pieces of code with hardcoded passwords, personal health questions, or unvalidated research data pasted in with a request to clean it up. Without knowing what happens with those inputs, and whether they end up in the model or not, they could cause enormous data breaches. The impact is hard to predict, but it should be clear that this requires caution and attention.
At memri, we find it fundamental to be transparent about how models work, which data is used, where it comes from, and what happens with it.
We at memri recognize the immense power and potential of AI and, more specifically, of Regenerative AI. We see an opportunity to build and enable AI tools that embrace regenerative principles, so that those tools are not optimized for profit alone but instead have a positive impact on the larger system, just as in so many natural ecosystems. By ensuring data sovereignty for all users, we enable them to make better decisions about their lives with AI, which we believe will lead to more diverse and inclusive solutions. It will also provide the opportunity to challenge and rectify the biases and inequities deeply ingrained in existing AI solutions.
Raising concerns and educating users about the potential dangers of AI is something we find important and instrumental for change to happen. However, we don’t want to end up on the list of doomsayers, so we’re bringing solutions as well.
We’re adopting and applying a Regenerative AI philosophy. For us this means finding life-centric, decentralized, community-led and collaborative solutions, inspired by the Regenerative philosophy and the core principles it embraces.
We’re building our organization, our community, our platform and our products with these principles in mind.
Regenerative AI is a broad concept; it’s a philosophy. It’s a way of looking at the things we make and actively checking whether they match the regenerative principles we embrace: looking not just at the output a model gives, but also at, for example, its energy consumption and CO2 emissions (which are shockingly high for models like GPT-3). It’s about daring to make different choices even if they require more effort or work, simply because we believe they are more sustainable. Examples could be training models only at times when green energy is widely available, or investing time in researching more energy-efficient models.
For us at memri, data sovereignty and privacy are at the center of all our work. We stand out from many other AI and tech companies, especially those offering services that are “free” to use, by ensuring that users’ data is stored safely AND privately, while still letting users use the AI apps they like.
Our Regenerative AI approach is to co-create and provide transparency. We work together with communities and individuals around the globe to come up with holistic solutions, be more innovative, enable creativity and benefit from each other’s experience. That’s why we’re already partnering with several communities, including RegenerateX, Joint Idea, Hypha, Shala, The Heroines, Women in AI, Unconditional Men, and many more.
Whether you’re a conscious developer concerned about the safety of your data or a user who wants to make better decisions with personal AI, we invite you to join our Regenerative AI movement. It’s up to us to shape what our future with AI will look like, and we propose a communal approach to developing this tool that can impact humanity at large.
We envision a future where individuals regain control over their digital experiences, where data sovereignty is a birthright, and where privacy and trust are foundational pillars of the online world.
By integrating Regenerative AI solutions responsibly, we aim to create an ecosystem that empowers individuals, fosters inclusivity, and cultivates sustainable growth.