While ChatGPT stokes fears of mass layoffs, new jobs are being spawned to review AI

Oct 12, 2023 | Financial

[Image: the logo of generative AI chatbot ChatGPT, owned by Microsoft-backed company OpenAI.]

Artificial intelligence might be driving concerns over people's job security, but a new wave of jobs is being created that focuses solely on reviewing the inputs and outputs of next-generation AI models.

Since November 2022, global business leaders, workers and academics alike have been gripped by fears that the emergence of generative AI will disrupt vast numbers of professional jobs.

Generative AI, which enables algorithms to generate humanlike, realistic text and images in response to textual prompts, is trained on vast quantities of data. It can produce sophisticated prose and even company presentations close to the quality of academically trained individuals.

That has, understandably, generated fears that jobs may be displaced by AI. Morgan Stanley estimates that as many as 300 million jobs could be taken over by AI, including office and administrative support roles; legal work; architecture and engineering; life, physical and social sciences; and financial and business operations.

But the inputs that AI models receive, and the outputs they create, often need to be guided and reviewed by humans, and this is creating some new paid careers and side hustles.

Getting paid to review AI

Prolific, a company that helps connect AI developers with research participants, has had direct involvement in providing people with compensation for reviewing AI-generated material. The company pays its participants to assess the quality of AI-generated outputs. Prolific recommends that developers pay participants at least $12 an hour, while minimum pay is set at $8 an hour.

The human reviewers are guided by Prolific's customers, which include Meta, Google, the University of Oxford and University College London. These customers walk reviewers through the process, briefing them on the potentially inaccurate or otherwise harmful material they may come across. Reviewers must provide consent before taking part in the research.

One research participant CNBC spoke to said he has used Prolific on a number of occasions to give his verdict on the quality of AI models. The research participant, wh …

