Generative AI: Preparing for next-gen artificial intelligence
Towards the end of last year, management consultancy McKinsey published an article whose first paragraph was written by ChatGPT, the generative artificial intelligence (AI) language model.
The article’s authors admitted that the AI’s attempt was “not perfect but overwhelmingly impressive”. They noted that products like ChatGPT and GitHub Copilot take technology into realms once thought to be reserved for humans. “With generative AI, computers can now arguably exhibit creativity. They can produce original content in response to queries, drawing from data they’ve ingested and interactions with users,” they said.
McKinsey said businesses need to explore where this technology can be applied. The paper suggests a number of possible scenarios, including AI-generated personalised sales and marketing content; generating task lists and documentation, and identifying errors, in operations; drafting candidate interview questions; and summarising legal documents. There is also a role in IT, where generative AI could be used to write code and documentation, such as converting simple JavaScript expressions into Python and automatically generating data.
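To make the code-writing scenario concrete, below is a small, hypothetical illustration of the kind of JavaScript-to-Python conversion the article describes. The data and names are invented for the example, not drawn from McKinsey's paper.

```python
# JavaScript expression a developer might hand to a generative model:
#   const total = items.filter(i => i.active)
#                      .reduce((sum, i) => sum + i.price, 0);

# Python equivalent the model might suggest in return:
items = [
    {"active": True, "price": 9.99},
    {"active": False, "price": 4.50},
    {"active": True, "price": 15.00},
]

# Sum the prices of active items, mirroring the filter/reduce chain above
total = sum(i["price"] for i in items if i["active"])
print(total)  # 24.99
```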
McKinsey urged companies considering generative AI to identify the parts of the business where the technology could have the most immediate impact. The article’s authors recommended that business leaders implement a mechanism to monitor the deployment of generative AI systems, since this type of technology is set to evolve rapidly.
Speaking to Computer Weekly last month about the role of generative AI in programming, GitHub CEO Thomas Dohmke said: “I think the next generation of developers will be used to AI and it’s going to be incredible. Technologies such as ChatGPT will enable a new way of learning, so young developers can interact with AI and learn at their own pace, whether it’s through tutorials or scripts in a predefined storyline.”
Mirroring the McKinsey article, Dohmke said generative AI could also enable developers to be more productive. “We’ve seen this in Copilot,” he said. “When you start using Copilot, it doesn’t have any information about you, so it uses the Codex model, which is a subvariant of the GPT model, to suggest code to you. But as you type, if it suggests code you don’t like, you can reject it.
“Over time, it learns what you accept or reject, and adapts to your coding style. We saw developers who were sceptical of the AI in Copilot get that ‘aha’ moment after a few days, and a couple of weeks later, they can no longer live without it.”
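Copilot's internal workings are not public, but the loop Dohmke describes, sending editor context to a model and letting the developer accept or reject the result, can be sketched roughly as follows. The endpoint, payload fields and helper names are hypothetical illustrations, not GitHub's actual API.

```python
import requests

# Hypothetical completion endpoint; Copilot's real service is not public
COMPLETION_URL = "https://example.com/v1/code-completions"

def suggest_code(prefix: str, language: str) -> str:
    """Send the code typed so far; return the model's suggested continuation."""
    response = requests.post(
        COMPLETION_URL,
        json={"prompt": prefix, "language": language, "max_tokens": 64},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["completion"]

def offer_suggestion(prefix: str, language: str = "python") -> str:
    """Show a suggestion and let the developer accept or reject it."""
    suggestion = suggest_code(prefix, language)
    print(f"Suggested: {suggestion!r}")
    if input("Accept? [y/n] ").strip().lower() == "y":
        return prefix + suggestion  # accepted: insert the completion
    return prefix  # rejected: keep only what the developer typed
```

In the real product, the accept/reject signal Dohmke mentions would also feed back into the system, so that suggestions adapt to the developer's style over time.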
Rogue AI
Shortly after Meta, the owner of Facebook, released BlenderBot 3, its conversational AI system, in August 2022, the company said it had collected 70,000 conversations from the public demo, which it would use to improve the system. But within days of the US release, reports emerged that the system was generating racist comments and false news. For instance, the Meta bot reportedly claimed that Donald Trump was the current president of the US.
According to Meta, from feedback provided by 25% of participants on 260,000 bot messages, 0.11% of BlenderBot’s responses were flagged as inappropriate, 1.36% as nonsensical, and 1% as off-topic.
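In absolute terms, those rates are small but not negligible at this scale. A quick back-of-the-envelope calculation using Meta's own figures:

```python
total_messages = 260_000  # bot messages on which participants gave feedback

# Flag rates reported by Meta
rates = {"inappropriate": 0.0011, "nonsensical": 0.0136, "off-topic": 0.01}

for label, rate in rates.items():
    print(f"{label}: ~{round(total_messages * rate):,} messages")

# inappropriate: ~286 messages
# nonsensical: ~3,536 messages
# off-topic: ~2,600 messages
```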
“We continue to believe that the way to advance AI is through open and reproducible research at scale,” said Joelle Pineau, managing director of fundamental AI research at Meta. “We also believe that progress is best served by inviting a wide and diverse community to participate.”
What the demo shows is that while such AI systems can indeed hold a conversation, they rely on training data gleaned from conversations people have had on the internet, so their responses reflect whatever they have “learned” from those earlier exchanges.
In a blog post from early December titled Beware of “coherent nonsense” when implementing generative AI, analysts at Forrester likened generative AI, in human developmental terms, to “late childhood”. “These systems can string together words convincingly and create logical arguments, but you can’t be sure if they’re just making things up or only telling you what you want to hear,” the blog’s authors wrote.
When deciding how to use generative AI, Forrester recommended that IT and business chiefs assess whether the data used to train the AI comes from a credible source and whether it is likely to be correct.
If an external partner trained the model, the Forrester blog’s authors urged IT leaders to consider how they would audit the data sources to ensure they can identify possible biases and confounders in the data.
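In its simplest form, such an audit might start by tallying where training examples come from, so that a heavily skewed source distribution stands out early. The record structure and source names below are hypothetical.

```python
from collections import Counter

# Hypothetical training records carrying a provenance field
training_data = [
    {"text": "example sentence one", "source": "forum_scrape"},
    {"text": "example sentence two", "source": "forum_scrape"},
    {"text": "example sentence three", "source": "licensed_corpus"},
]

# A lopsided distribution is a first hint of possible bias or confounders
source_counts = Counter(record["source"] for record in training_data)
for source, count in source_counts.most_common():
    print(f"{source}: {count / len(training_data):.0%}")
```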
Understanding context is another area to explore. Forrester suggested that IT and business leaders assess whether the AI can understand new questions in reference to previous ones, and whether it tailors its answers to what it knows about the user.
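One simple way to run that context check is to ask a follow-up question that only makes sense given an earlier turn, then see whether the model resolves the reference. The sketch below uses the OpenAI Python client for illustration; the model name and prompts are assumptions, and any conversational API that accepts message history would serve the same purpose.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# First turn establishes a referent the follow-up will lean on
history = [{"role": "user", "content": "Who wrote the novel Dune?"}]
first = client.chat.completions.create(model="gpt-4o-mini", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# "he" is only resolvable if the model actually uses the previous turns
history.append({"role": "user", "content": "In what year did he publish it?"})
second = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(second.choices[0].message.content)  # should mention Frank Herbert, 1965
```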