
ChatGPT: How to prevent it becoming a nightmare for professional writers

Credit: Unsplash/CC0 Public Domain

Nearly half of white-collar professionals have tried using ChatGPT to help with their work, according to a recent survey of more than 10,000 people at blue chips such as Google, JP Morgan and McKinsey. That’s staggering, considering the AI chatbot was only released to the public in November. It’s potentially very exciting for the future of work, but it also brings serious risks.

ChatGPT and its imminent rivals are part of a long history of technologies geared to reducing the labor of writing. These range from the printing press and the telegraph to the typewriter, word processors and personal computing.

AI chatbots can help overcome human limitations, including speed, foreign languages and writer’s block—potentially helping with everything from writing emails to reports and articles to marketing campaigns. It’s a fascinating trans-human relationship in which the AI uses past human-produced texts to inform and shape the writing of new texts by other humans.

Jobs that involve significant amounts of writing, such as journalism, academic research and policy analysis, will inevitably be affected most. In all these cases, AI chatbots could allow new knowledge and ideas to be disseminated more rapidly. Used carelessly, they could certainly lead to weaker, less useful writing; but if used to create a structure that the writer then thoroughly edits with their own original ideas, they could be very beneficial.

Also, some people have a competitive advantage at writing not because their ideas are better but because they are just faster. This is often because they are writing in their first language, due to nothing more than historical coincidence. AI chatbots could therefore help make writing more inclusive and accessible.

Downsides

On the other hand, there are worries that ChatGPT and its competitors could steal many people's jobs, especially in traditional white-collar professions, though it's very difficult to say at this stage how many people will be affected. For example, Mihir Shukla, CEO and founder of California-based software company Automation Anywhere, thinks that "anywhere from 15% to 70% of all the work we do in front of the computer could be automated." A recent McKinsey report, by contrast, suggests that only about 9% of people will have to change careers. Even so, that's a lot of people, and lower- to mid-level employees are likely to be the ones most affected.

Linked to possible job losses is the danger that employers will use these technologies to justify cost savings, pressing existing workers to "do more with less." Employers have historically used labor-saving devices to maximize productivity, making people work harder, not smarter or better. Computers and email, for example, have made work never-ending for many people.

Employees could now therefore end up being pressured to produce more work. Yet this risks missing the real leap in productivity that AI could bring about. If used correctly, AI chatbots could free up employees to have more time to produce high-quality, original work.

There are additionally concerns about the human cost of creating AI chatbots. Kenyan workers, for instance, were paid between US$1 and US$2 (80 pence to £1.60) per hour to train OpenAI’s GPT-3 model, on which ChatGPT is based. Their brief was to make it less toxic by labeling thousands of samples of potentially offensive text so that the platform could learn to detect violent, racist and sexist language. This was so traumatic for the workers that the contractor nearly brought the project to an early end. Unfortunately, there’s likely to be much more of this kind of work to come.

Finally, AI chatbots raise fascinating intellectual property issues. In particular, it's not clear who owns the work they produce. This could make it harder for companies or freelancers to protect their own output, while also potentially exposing them to copyright infringement claims from whoever owns the writing that an AI chatbot appears to have reproduced. It's a complex area, and it very much remains to be seen how courts will handle test cases.

It also raises questions about situations where the ownership of a piece of work is already in a gray area. While an employer will often own an employee's written work, this has not traditionally been the case for university academics. Now, however, universities are seeking to use their power as employers to claim first ownership of academics' published research. If they succeed, they could then put pressure on academics to use AI chatbots to increase their research output.

Worker-friendly AI?

One way of dealing with the danger of heavier workloads is through regulation. At this stage, however, we worry that the authorities will set more of an aspirational "ceiling" for what employers should aim to do for employees, rather than a clearly regulated and enforced "floor" ensuring decent work.

We must start developing basic standards to limit the potential for exploiting workers. This could include, for instance, caps on the amount of AI-assisted written work that companies can expect of individuals. There is also clearly an important role for raising employers' awareness of the potential harms and benefits of these technologies.

It’s also important to recognize that the dangers are being aggravated by companies’ focus on maximizing profits and productivity. This points to the need for more alternative work environments where the emphasis is on providing workers with a good quality of life. The OECD has for instance been promoting the “social economy,” which encompasses worker and community-owned cooperatives. In such workplaces, tools such as ChatGPT have the potential to be more beneficial than threatening.

The good news is that there is probably a narrow window before these technologies transform workplaces. We tried using ChatGPT to write this article and didn’t find it particularly useful—though that may partly reflect our own inexperience at prompting the chatbot. Now is the time to recognize where this is heading and get the world up to speed. A year or two from now, workplaces could look very different.

Provided by
The Conversation


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
ChatGPT: How to prevent it becoming a nightmare for professional writers (2023, March 1)
retrieved 1 March 2023
from https://techxplore.com/news/2023-03-chatgpt-nightmare-professional-writers.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
