
By Laine Penney
AI has developed rapidly in recent years, and there is much concern about what its future holds. Some are highly supportive, excited about new inventions that will change people’s lives for the better, while others highlight the harm it has already done to careers.
Many people are either for AI or against it, without considering that the type of AI they praise or condemn is just one side of the story. The debate really concerns two different types of AI: generative AI and assistive AI. But what’s the difference?
Generative AI is the most easily accessible form of AI; a quick Google search gives access to a wide range of models, most notably ChatGPT. It “helps you get answers, find inspiration and be more productive,” according to the website. It’s one of the most popular AI models out there, providing users with text and images generated in seconds, which is why it’s classified as generative AI.
Assistive AI, by contrast, refers to technology that completes tasks alongside humans, such as robots that help organize objects or deliver items. It is often developed for STEM industries; healthcare, for example, uses it to detect cancer. It’s praised for its capabilities and the causes it contributes to, such as exploring and mapping entire caves so that search teams can more easily find and rescue people in danger.
These technologies are capable of extraordinary things, so why is there so much backlash? For starters, companies train generative AI on data from the Internet, usually hundreds of millions of posts and images. These companies usually do not ask permission to use this content, which means a great deal of original work is stolen.
The company Getty Images filed a complaint in February 2023 against Stability AI, accusing it of copying “more than 12 million photographs from Getty Images’ collection, along with the associated captions and metadata, without permission from or compensation to Getty Images, as part of its efforts to build a competing business.” The case is still ongoing, with Getty dropping its original claim while new secondary copyright infringement claims have arisen.
Social media companies also train their own AIs on user data, and some apps do not give users an easy way to opt out. Pinterest lets you opt out of training for its AI model “Pinterest Canvas” with a toggle in profile settings; Meta, meanwhile, requires you to fill out an entire form and provide an explanation.
In general, generative AI is associated with stolen work, privacy invasion and copyright infringement, because it is trained on user data and Internet content without permission from creators. But what’s so bad about assistive AI?
Assistive AI is built on the same foundation as most AI: it runs on algorithms and is trained on data. However, instead of generating content from human prompts, it seeks to improve upon, tweak and add information to what humans provide. With insufficient data, though, it can produce bias and inaccuracy. “If the designers do not provide representative data, the resulting AI systems become biased and unfair,” according to Dylan Losey at Virginia Tech.
Sometimes, generative AI works hand in hand with assistive AI. Given the right prompt, ChatGPT can tweak and assist with your work rather than just do the work for you. So generative AI can be a component of assistive AI; assistive AI, however, is not a component of generative AI.
There are many pros and cons to the different uses of AI, but the main issue with both generative AI and assistive AI is our dependence on them. With generative AI, we let technology think for us, losing our critical thinking and individuality. With assistive AI, we let technology do things for us, which may put human job security at risk.
Are we going to abolish AI? Not any time soon. For all its issues, its amazing capabilities can’t be overlooked. It promises to save lives with cancer-predicting algorithms and search-and-rescue technology. That’s why it’s on humanity’s shoulders not to abuse this power, and if someone does, we need the right technology at hand to combat the harm. And if that technology enlists the help of artificial intelligence, then so be it.