- PRO Courses Guides New Tech Help Pro Expert Videos About wikiHow Pro Upgrade Sign In
- EDIT Edit this Article
- EXPLORE Tech Help Pro About Us Random Article Quizzes Request a New Article Community Dashboard This Or That Game Popular Categories Arts and Entertainment Artwork Books Movies Computers and Electronics Computers Phone Skills Technology Hacks Health Men's Health Mental Health Women's Health Relationships Dating Love Relationship Issues Hobbies and Crafts Crafts Drawing Games Education & Communication Communication Skills Personal Development Studying Personal Care and Style Fashion Hair Care Personal Hygiene Youth Personal Care School Stuff Dating All Categories Arts and Entertainment Finance and Business Home and Garden Relationship Quizzes Cars & Other Vehicles Food and Entertaining Personal Care and Style Sports and Fitness Computers and Electronics Health Pets and Animals Travel Education & Communication Hobbies and Crafts Philosophy and Religion Work World Family Life Holidays and Traditions Relationships Youth
- Browse Articles
- Learn Something New
- Quizzes Hot
- This Or That Game
- Train Your Brain
- Explore More
- Support wikiHow
- About wikiHow
- Log in / Sign up
- Computers and Electronics
- Online Communications
How to Get ChatGPT to Write an Essay: Prompts, Outlines, & More
Last Updated: June 2, 2024 Fact Checked
Getting ChatGPT to Write the Essay
Using ai to help you write, expert interview.
This article was written by Bryce Warwick, JD and by wikiHow staff writer, Nicole Levine, MFA . Bryce Warwick is currently the President of Warwick Strategies, an organization based in the San Francisco Bay Area offering premium, personalized private tutoring for the GMAT, LSAT and GRE. Bryce has a JD from the George Washington University Law School. This article has been fact-checked, ensuring the accuracy of any cited facts and confirming the authority of its sources. This article has been viewed 47,865 times.
Are you curious about using ChatGPT to write an essay? While most instructors have tools that make it easy to detect AI-written essays, there are ways you can use OpenAI's ChatGPT to write papers without worrying about plagiarism or getting caught. In addition to writing essays for you, ChatGPT can also help you come up with topics, write outlines, find sources, check your grammar, and even format your citations. This wikiHow article will teach you the best ways to use ChatGPT to write essays, including helpful example prompts that will generate impressive papers.
Things You Should Know
- To have ChatGPT write an essay, tell it your topic, word count, type of essay, and facts or viewpoints to include.
- ChatGPT is also useful for generating essay topics, writing outlines, and checking grammar.
- Because ChatGPT can make mistakes and trigger AI-detection alarms, it's better to use AI to assist with writing than have it do the writing.
- Before using the OpenAI's ChatGPT to write your essay, make sure you understand your instructor's policies on AI tools. Using ChatGPT may be against the rules, and it's easy for instructors to detect AI-written essays.
- While you can use ChatGPT to write a polished-looking essay, there are drawbacks. Most importantly, ChatGPT cannot verify facts or provide references. This means that essays created by ChatGPT may contain made-up facts and biased content. [1] X Research source It's best to use ChatGPT for inspiration and examples instead of having it write the essay for you.
- The topic you want to write about.
- Essay length, such as word or page count. Whether you're writing an essay for a class, college application, or even a cover letter , you'll want to tell ChatGPT how much to write.
- Other assignment details, such as type of essay (e.g., personal, book report, etc.) and points to mention.
- If you're writing an argumentative or persuasive essay , know the stance you want to take so ChatGPT can argue your point.
- If you have notes on the topic that you want to include, you can also provide those to ChatGPT.
- When you plan an essay, think of a thesis, a topic sentence, a body paragraph, and the examples you expect to present in each paragraph.
- It can be like an outline and not an extensive sentence-by-sentence structure. It should be a good overview of how the points relate.
- "Write a 2000-word college essay that covers different approaches to gun violence prevention in the United States. Include facts about gun laws and give ideas on how to improve them."
- This prompt not only tells ChatGPT the topic, length, and grade level, but also that the essay is personal. ChatGPT will write the essay in the first-person point of view.
- "Write a 4-page college application essay about an obstacle I have overcome. I am applying to the Geography program and want to be a cartographer. The obstacle is that I have dyslexia. Explain that I have always loved maps, and that having dyslexia makes me better at making them."
Tyrone Showers
Be specific when using ChatGPT. Clear and concise prompts outlining your exact needs help ChatGPT tailor its response. Specify the desired outcome (e.g., creative writing, informative summary, functional resume), any length constraints (word or character count), and the preferred emotional tone (formal, humorous, etc.)
- In our essay about gun control, ChatGPT did not mention school shootings. If we want to discuss this topic in the essay, we can use the prompt, "Discuss school shootings in the essay."
- Let's say we review our college entrance essay and realize that we forgot to mention that we grew up without parents. Add to the essay by saying, "Mention that my parents died when I was young."
- In the Israel-Palestine essay, ChatGPT explored two options for peace: A 2-state solution and a bi-state solution. If you'd rather the essay focus on a single option, ask ChatGPT to remove one. For example, "Change my essay so that it focuses on a bi-state solution."
Pay close attention to the content ChatGPT generates. If you use ChatGPT often, you'll start noticing its patterns, like its tendency to begin articles with phrases like "in today's digital world." Once you spot patterns, you can refine your prompts to steer ChatGPT in a better direction and avoid repetitive content.
- "Give me ideas for an essay about the Israel-Palestine conflict."
- "Ideas for a persuasive essay about a current event."
- "Give me a list of argumentative essay topics about COVID-19 for a Political Science 101 class."
- "Create an outline for an argumentative essay called "The Impact of COVID-19 on the Economy."
- "Write an outline for an essay about positive uses of AI chatbots in schools."
- "Create an outline for a short 2-page essay on disinformation in the 2016 election."
- "Find peer-reviewed sources for advances in using MRNA vaccines for cancer."
- "Give me a list of sources from academic journals about Black feminism in the movie Black Panther."
- "Give me sources for an essay on current efforts to ban children's books in US libraries."
- "Write a 4-page college paper about how global warming is changing the automotive industry in the United States."
- "Write a 750-word personal college entrance essay about how my experience with homelessness as a child has made me more resilient."
- You can even refer to the outline you created with ChatGPT, as the AI bot can reference up to 3000 words from the current conversation. For example: "Write a 1000 word argumentative essay called 'The Impact of COVID-19 on the United States Economy' using the outline you provided. Argue that the government should take more action to support businesses affected by the pandemic."
- One way to do this is to paste a list of the sources you've used, including URLs, book titles, authors, pages, publishers, and other details, into ChatGPT along with the instruction "Create an MLA Works Cited page for these sources."
- You can also ask ChatGPT to provide a list of sources, and then build a Works Cited or References page that includes those sources. You can then replace sources you didn't use with the sources you did use.
Expert Q&A
- Because it's easy for teachers, hiring managers, and college admissions offices to spot AI-written essays, it's best to use your ChatGPT-written essay as a guide to write your own essay. Using the structure and ideas from ChatGPT, write an essay in the same format, but using your own words. Thanks Helpful 0 Not Helpful 0
- Always double-check the facts in your essay, and make sure facts are backed up with legitimate sources. Thanks Helpful 0 Not Helpful 0
- If you see an error that says ChatGPT is at capacity , wait a few moments and try again. Thanks Helpful 0 Not Helpful 0
- Using ChatGPT to write or assist with your essay may be against your instructor's rules. Make sure you understand the consequences of using ChatGPT to write or assist with your essay. Thanks Helpful 1 Not Helpful 0
- ChatGPT-written essays may include factual inaccuracies, outdated information, and inadequate detail. [3] X Research source Thanks Helpful 0 Not Helpful 0
You Might Also Like
Thanks for reading our article! If youâd like to learn more about completing school assignments, check out our in-depth interview with Bryce Warwick, JD .
- â https://help.openai.com/en/articles/6783457-what-is-chatgpt
- â https://platform.openai.com/examples/default-essay-outline
- â https://www.ipl.org/div/chatgpt/
About This Article
- Send fan mail to authors
Is this article up to date?
Featured Articles
Trending Articles
Watch Articles
- Terms of Use
- Privacy Policy
- Do Not Sell or Share My Info
- Not Selling Info
wikiHow Tech Help:
Tech troubles got you down? We've got the tips you need
- Discovery Channel Shows
- The transcendence of cars
- French Revolution in Haiti (1976)
- Harry Potter is a Machine learning researcher
Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
- View all journals
- Explore content
- About the journal
- Publish with us
- Sign up for alerts
- NEWS EXPLAINER
- 09 December 2022
AI bot ChatGPT writes smart essays â should professors worry?
- Chris Stokel-Walker
You can also search for this author in PubMed Google Scholar
Between overwork, underpayment and the pressure to publish, academics have plenty to worry about. Now thereâs a fresh concern: ChatGPT , an artificial intelligence (AI) powered chatbot that creates surprisingly intelligent-sounding text in response to user prompts, including homework assignments and exam-style questions. The replies are so lucid, well-researched and decently referenced that some academics are calling the bot the death knell for conventional forms of educational assessment. How worried should professors and lecturers be?
Access options
Access Nature and 54 other Nature Portfolio journals
Get Nature+, our best-value online-access subscription
24,99 ⏠/ 30 days
cancel any time
Subscribe to this journal
Receive 51 print issues and online access
185,98 ⏠per year
only 3,65 ⏠per issue
Rent or buy this article
Prices vary by article type
Prices may be subject to local taxes which are calculated during checkout
doi: https://doi.org/10.1038/d41586-022-04397-7
Reprints and permissions
Related Articles
Are ChatGPT and AlphaCode going to replace programmers?
How language-generation AIs could transform science
Open-source language AI challenges big techâs models
- Computer science
Accelerating AI: the cutting-edge chips powering the computing revolution
News Feature 04 JUN 24
Who owns your voice? Scarlett Johansson OpenAI complaint raises questions
News Explainer 29 MAY 24
Low-latency automotive vision with event cameras
Article 29 MAY 24
Underfunding cannabis research hampers sensible policymaking and boosts the black market
Correspondence 04 JUN 24
What Modiâs third term in India means for science
News 04 JUN 24
Defying the stereotype of Black resilience
Career Q&A 30 MAY 24
Brazilâs plummeting graduate enrolments hint at declining interest in academic science careers
Career News 21 MAY 24
Reading between the lines: application essays predict university success
Research Highlight 17 MAY 24
How to stop students cramming for exams? Send them to sea
News & Views 30 APR 24
Post-Doctoral Fellow in Chemistry and Chemical Biology
We are seeking a highly motivated, interdisciplinary scientist to investigate the host-gut microbiota interactions that are associated with driving...
Cambridge, Massachusetts
Harvard University - Department of Chemistry and Chemical Biology
Postdoc Position (f/m/d) in âBuilding Healthcare Resilience Against Cyber-Attacks"
Karlsruhe Institute of Technology (KIT) â The Research University in the Helmholtz Association creates and imparts knowledge for the society and th...
76344, Eggenstein-Leopoldshafen (DE)
Karlsruher Institut fĂŒr Technologie (KIT) Campus Nord
Research assistant (Praedoc) (m/f/d) - Department of Biology, Chemistry, Pharmacy
Department of Biology, Chemistry, Pharmacy - Institute of Chemistry and Biochemistry AG Absmeier  Research assistant (Praedoc) (m/f/d) with 65%-pa...
14195, Berlin (DE)
Freie UniversitÀt Berlin
Professor, Associate Professor, Postdoctoral Fellow Recruitment
Candidate shall have an international academic vision, and have a high academic level and strong scientific research ability.
Shenzhen, Guangdong, China
Shenzhen University of Advanced Technology
Open Faculty Position in Mathematical and Information Security
We are now seeking outstanding candidates in all areas of mathematics and information security.
Dongguan, Guangdong, China
GREAT BAY INSTITUTE FOR ADVANCED STUDYïŒ Institute of Mathematical and Information Security
Sign up for the Nature Briefing newsletter â what matters in science, free to your inbox daily.
Quick links
- Explore articles by subject
- Guide to authors
- Editorial policies
More From Forbes
Educators battle plagiarism as 89% of students admit to using openaiâs chatgpt for homework.
- Share to Facebook
- Share to Twitter
- Share to Linkedin
Who's teaching who?
A large majority of students are already using ChatGPT for homework assignments, creating challenges around plagiarism , cheating, and learning. According to Wharton MBA Professor Christian Terwisch, ChatGPT would receive âa B or a B-â on an Ivy League MBA-level exam in operations management. Another professor at a Utah-based university asked ChatGPT to tweet in his voice - leading Professor Alex Lawrence to declare that âthis is the greatest cheating tool ever inventedâ, according to the Wall Street Journal . The plagiarism potential is potent - so, is banning the tool a realistic solution?
New research from Study.com provides eye-opening insight into the educational impact of ChatGPT , an online tool that has a surprising mastery of learning and human language. INSIDER reports that researchers recently put ChatGPT through the United States Medical Licensing exam (the three-part exam used to qualify medical school students for residency - basically, a test to see if you can be a doctor). In a December report, ChatGPT âperformed at or near the passing threshold for all three exams without any training or reinforcement.â Lawrence, a professor from Weber State in Utah who tested via tweet, wrote a follow-up message to his students regarding the new platform from OpenAI: âI hope to inspire and educate you enough that you will want to learn how to leverage these tools, not just to learn to cheat better.â No word on how the students have responded so far.
Machines, tools and software have been making certain tasks easier for us for thousands of years. Are we about to outsource learning and education to artificial intelligence ? And what are the implications, beyond the classroom, if we do?
Considering that 90% of students are aware of ChatGPT, and 89% of survey respondents report that they have used the platform to help with a homework assignment, the application of OpenAIâs platform is already here. More from the survey:
- 48% of students admitted to using ChatGPT for an at-home test or quiz, 53% had it write an essay, and 22% had it write an outline for a paper.
- 72% of college students believe that ChatGPT should be banned from their college's network. (New York, Seattle and Los Angeles have all blocked the service from their public school networks).
- 82% of college professors are aware of ChatGPT
- 72% of college professors who are aware of ChatGPT are concerned about its impact on cheating
- Over a third (34%) of all educators believe that ChatGPT should be banned in schools and universities, while 66% support students having access to it.
- Meanwhile, 5% of educators say that they have used ChatGPT to teach a class, and 7% have used the platform to create writing prompts.
Best High-Yield Savings Accounts Of 2024
Best 5% interest savings accounts of 2024.
A teacher quoted anonymously in the Study.com survey shares, â'I love that students would have another resource to help answer questions. Do I worry some kids would abuse it? Yes. But they use Google and get answers without an explanation. It's my understanding that ChatGPT explains answers. That [explanation] would be more beneficial.â Or would it become a crutch?
Modern society has many options for transportation: cars, planes, trains, and even electric scooters all help us to get around. But these machines havenât replaced the simple fact that walking and running (on your own) is really, really good for you. Electric bikes are fun, but pushing pedals on our own is where we find our fitness. Without movement comes malady. A sedentary life that relies solely on external mechanisms for transport is a recipe for atrophy, poor health, and even a shortened lifespan. Will ChatGPT create educational atrophy, the equivalent of an electric bicycle for our brains?
Of course, when calculators came into the classroom, many declared the decline of math skills would soon follow. Research conducted as recently as 2012 has proven this to be false. Calculators had no positive or negative effects on basic math skills.
But ChatGPT has already gone beyond the basics, passing medical exams and MBA-level tests. A brave new world is already here, with implications for cheating and plagiarism, to be sure. But an even deeper implication points to the very nature of learning itself, when ChatGPT has become a super-charged repository for what is perhaps the most human of all inventions: the synthesis of our language. (That same synthesis that sits atop Blooms Taxonomy - a revered pyramid of thinking, that outlines the path to higher learning ). Perhaps educators, students and even business leaders will discover something old is new again, from ChatGPT. That discovery? Seems Socrates was right: the key to strong education begins with asking the right questions. Especially if you are talking to a âbot.
- Editorial Standards
- Reprints & Permissions
Join The Conversation
One Community. Many Voices. Create a free account to share your thoughts.
Forbes Community Guidelines
Our community is about connecting people through open and thoughtful conversations. We want our readers to share their views and exchange ideas and facts in a safe space.
In order to do so, please follow the posting rules in our site's Terms of Service. We've summarized some of those key rules below. Simply put, keep it civil.
Your post will be rejected if we notice that it seems to contain:
- False or intentionally out-of-context or misleading information
- Insults, profanity, incoherent, obscene or inflammatory language or threats of any kind
- Attacks on the identity of other commenters or the article's author
- Content that otherwise violates our site's terms.
User accounts will be blocked if we notice or believe that users are engaged in:
- Continuous attempts to re-post comments that have been previously moderated/rejected
- Racist, sexist, homophobic or other discriminatory comments
- Attempts or tactics that put the site security at risk
- Actions that otherwise violate our site's terms.
So, how can you be a power user?
- Stay on topic and share your insights
- Feel free to be clear and thoughtful to get your point across
- âLikeâ or âDislikeâ to show your point of view.
- Protect your community.
- Use the report tool to alert us when someone breaks the rules.
Thanks for reading our community guidelines. Please read the full list of posting rules found in our site's Terms of Service.
We've detected unusual activity from your computer network
To continue, please click the box below to let us know you're not a robot.
Why did this happen?
Please make sure your browser supports JavaScript and cookies and that you are not blocking them from loading. For more information you can review our Terms of Service and Cookie Policy .
For inquiries related to this message please contact our support team and provide the reference ID below.
Advertisement
Supported by
OpenAI Says It Has Begun Training a New Flagship A.I. Model
The advanced A.I. system would succeed GPT-4, which powers ChatGPT. The company has also created a new safety committee to address A.I.âs risks.
- Share full article
By Cade Metz
Reporting from San Francisco
OpenAI said on Tuesday that it had begun training a new flagship artificial intelligence model that would succeed the GPT-4 technology that drives its popular online chatbot, ChatGPT.
The San Francisco start-up, which is one of the worldâs leading A.I. companies, said in a blog post that it expected the new model to bring âthe next level of capabilitiesâ as it strove to build âartificial general intelligence,â or A.G.I., a machine that can do anything the human brain can do. The new model would be an engine for A.I. products including chatbots, digital assistants akin to Appleâs Siri, search engines and image generators.
OpenAI also said it was creating a new Safety and Security Committee to explore how it should handle the risks posed by the new model and future technologies.
âWhile we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,â the company said.
OpenAI is aiming to move A.I. technology forward faster than its rivals, while also appeasing critics who say the technology is becoming increasingly dangerous, helping to spread disinformation, replace jobs and even threaten humanity . Experts disagree on when tech companies will reach artificial general intelligence, but companies including OpenAI, Google, Meta and Microsoft have steadily increased the power of A.I. technologies for more than a decade, demonstrating a noticeable leap roughly every two to three years.
OpenAIâs GPT-4 , which was released in March 2023, enables chatbots and other software apps to answer questions, write emails, generate term papers and analyze data. An updated version of the technology , which was unveiled this month and is not yet widely available, can also generate images and respond to questions and commands in a highly conversational voice.
Days after OpenAI showed the updated version â called GPT-4o â the actress Scarlett Johansson said it used a voice that sounded âeerily similar to mine.â She said that she had declined efforts by OpenAIâs chief executive, Sam Altman, to license her voice for the product and that she had hired a lawyer and asked OpenAI to stop using the voice. The company said the voice was not Ms. Johanssonâs.
Technologies like GPT-4o learn their skills by analyzing vast amounts of digital data, including sounds, photos, videos, Wikipedia articles, books and news articles. The New York Times sued OpenAI and Microsoft in December, claiming copyright infringement of news content related to A.I. systems.
Digital âtrainingâ of A.I. models can take months or even years. Once the training is completed, A.I. companies typically spend several more months testing the technology and fine-tuning it for public use.
That could mean that OpenAIâs next model will not arrive for another nine months to a year or more.
As OpenAI trains its new model, its new Safety and Security committee will work to hone policies and processes for safeguarding the technology, the company said. The committee includes Mr. Altman, as well as the OpenAI board members Bret Taylor, Adam DâAngelo and Nicole Seligman. The company said the new policies could be in place in the late summer or fall.
This month, OpenAI said Ilya Sutskever, a co-founder and one of the leaders of its safety efforts, was leaving the company . This caused concern that OpenAI was not grappling enough with the dangers posed by A.I.
Dr. Sutskever had joined three other board members in November to remove Mr. Altman from OpenAI, saying Mr. Altman could no longer be trusted with the companyâs plan to create artificial general intelligence for the good of humanity. After a lobbying campaign by Mr. Altmanâs allies, he was reinstated five days later and has since reasserted control over the company.
Dr. Sutskever led what OpenAI called its Superalignment team, which explored ways of ensuring that future A.I. models would not do harm. Like others in the field, he had grown increasingly concerned that A.I. posed a threat to humanity.
Jan Leike, who ran the Superalignment team with Dr. Sutskever, resigned from the company this month, leaving the teamâs future in doubt.
OpenAI has folded its long-term safety research into its larger efforts to ensure that its technologies are safe. That work will be led by John Schulman, another co-founder, who previously headed the team that created ChatGPT. The new safety committee will oversee Dr. Schulmanâs research and provide guidance for how the company will address technological risks.
Cade Metz writes about artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas of technology. More about Cade Metz
Explore Our Coverage of Artificial Intelligence
News  and Analysis
Google appears to have rolled back its new A.I. Overviews  after the technology produced a litany of untruths and errors.
OpenAI said that it has begun training a new flagship A.I. model  that would succeed the GPT-4 technology that drives its popular online chatbot, ChatGPT.
Elon Muskâs A.I. company, xAI, said that it had raised $6 billion , helping to close the funding gap with OpenAI, Anthropic and other rivals.
The Age of A.I.
After some trying years during which Mark Zuckerberg could do little right, many developers and technologists have embraced the Meta chief  as their champion of âopen-sourceâ A.I.
DâYouville University in Buffalo had an A.I. robot speak at its commencement . Not everyone was happy about it.
A new program, backed by Cornell Tech, M.I.T. and U.C.L.A., helps prepare lower-income, Latina and Black female computing majors  for A.I. careers.
Chinaâs answer to OpenAI is a Xi Jinping chatbot
Good morning. Clay Chandler here, writing from Hong Kong. Last week I noted that the worldâs two big AI superpowers seem to be running the global AI arms race in opposite directions . U.S. lawmakers have balked at imposing even the most minimal restrictions on fast-moving new technologies. China, meanwhile, has established a dense regulatory framework for AI designed to eliminate all possible risks.Â
Those differences become even more starkly apparent this week. Â
In the U.S., of course, everyone is aghast at Scarlett Johanssonâs allegation that OpenAI CEO Sam Altman used a voice âeerilyâ similar to hers  for Sky, the chatbot mode featured in OpenAIâs latest ChatGPT upgrade. Few are buying Altmanâs insistence that he ânever intendedâ the chatbot to resemble her. The details of the caseâthat Altman approached Johansson with a request to license her voice, that she declined, that he persisted using a Johansson-like voice anywayâhave fanned Hollywoodâs worst fears about arrogant tech bros using AI to rip off creators. Â
In China, the weekâs big AI story is that the nationâs internet regulator is rolling out a chatbot of its own, this one based on the thoughts of President Xi Jinping. The Financial Times reports that a research center reporting to the powerful Cyberspace Administration of China is developing a large language model trained on the Chinese leadersâ political philosophy, known as âXi Jinping Thought on Socialism with Chinese Characteristics for a New Era.â The Wall Street Journal says the chatbot will also be trained on six professional databases about technology. Â
Itâs unclear whether the CACâs chatbotâwhich both papers have dubbed âChat Xi PTââis meant to be used, or even whether it will be released to the public. But itâs not difficult to imagine how such a model might be employed as a tool for enforcing ideological orthodoxy. Â
Neither of these approaches to governing AI seems sustainable to me. At some point, U.S. voters are going to stop swallowing AI developersâ claims that the only way the U.S. can hope to compete with China, save democracy, and preserve the American way of life is to let giant tech companies use AI in whatever way they want. And surely Chinese officials eventually will figure out that too much state control over AI will slow the pace of innovation and leave China less secure not more. Or will they? For now, the technology keeps getting smarter faster than the people creating and using it.  Â
More news below
Clay Chandler [email protected] Follow on LinkedIn
Saudi Aramco, the worldâs largest oil company, is aiming for net-zero emissions while continuing to pump fossil fuels. Ahmad Al-Khowaiter, executive vice president for technology and innovation, says Aramco devotes 60% of its $800 million R&D budget to âsustainability.â But the company isnât going to give up oil any time soon: âWe need all sources of energy to meet the growth in demand, which is just tremendous in the developing world,â he says. Fortune
An even bigger Boeing cash burn
Boeing CFO Brian West warned investors that the planemaker is unlikely to generate any cash this year and will burn billions of dollars more than anticipated. The company is producing planes more slowly as it tries to uncover safety and production issues. Boeing shares fell over 7% on Thursday. Wall Street Journal
Back to the office
More banks are ordering more of their U.S.-based staff to be in the office five days a week, due to new regulations that make remote work more difficult. HSBC, Citigroup and Barclays are asking remote workers to come in for the whole work week. Banking executives say that ensuring off-site workers comply with regulations will be too difficult and too costly as the industryâs watchdog returns to pre-COVID norms for monitoring staff and inspecting workplaces. Bloomberg
AROUND THE WATERCOOLER
Ralph Laurenâs longtime CFO on preparing her successorâand whatâs next after her COO tenure by Sheryl Estrada
5 telltale signs a CEO is a narcissist. Study finds LinkedIn profiles offer clues by Lila MaclellanÂ
OpenAIâs week of chaos has reopened a festering wound at the $80 billion startup that was supposed to have healed by Sharon Goldman
Book excerpt: Gen Z ignores brand messaging by default. Hereâs how to win their attentionâand loyalty by Mitch Duckler
More than two-thirds of bosses are âaccidental managersââand their requests for proper training are being ignored by Eleanor Pringle
Veniceâs âtourist taxâ is being labeled a âmiserable failureââand the project might not break even this year by Ryan Hogg
T his edition of CEO Daily was curated by Nicholas Gordon.Â
Latest in Newsletters
Intel comes out swinging as AI PC market explodes
E*Trade can boot Roaring Kittyâbut someone will just take his place
How AI and high interest rates shaped this yearâs Fortune 500
The share of Fortune 500 businesses run by women canât seem to budge beyond 10%
96% of executives are desperate for workers to use AI, but there are a few key obstacles in the way
Cisco leaps forward in Fortune 500ââI see us clearly as a leader in AI,â CFO saysÂ
Most popular.
Media mogul Rupert Murdoch, 93, ties the knot for the 5th time, marrying the ex-wife of a billionaire energy investor and Russian politician
Rachel Romer built a $4.4 billion education unicorn by 34âthen she had a stroke. Now her CEO successor reckons with Guildâs new chapter
Elon Musk accused of selling $7.5 billion of Tesla stock before releasing disappointing sales data that plunged the share price to two-year low
Traders who scooped up Warren Buffettâs Berkshire Hathaway shares at a massive $620,000 discount during âglitchâ will have their deals canceled by the NYSE
Managers are puzzled by Gen Zers as giving feedback becomes a lost art in the era of the âcoddled mindâ
Home prices could start dropping this summer, marking the âfirst domino to fallâ on the way to a weaker economy, strategist says
Media Companies Are Making a Huge Mistake With AI
News organizations rushing to absolve AI companies of theft are acting against their own interests.
In 2011, I sat in the Guggenheim Museum in New York and watched Rupert Murdoch announce the beginning of a ânew digital renaissanceâ for news. The newspaper mogul was unveiling an iPad-inspired publication called The Daily . âThe iPad demands that we completely reimagine our craft,â he said. The Daily shut down the following year, after burning through a reported $40 million.
For as long as I have reported on internet companies, I have watched news leaders try to bend their businesses to the will of Apple, Google, Meta, and more. Chasing techâs distribution and cash, news firms strike deals to try to ride out the next digital wave. They make concessions to platforms that attempt to take all of the audience (and trust) that great journalism attracts, without ever having to do the complicated and expensive work of the journalism itself. And it never, ever works as planned.
Publishers like News Corp did it with Apple and the iPad, investing huge sums in flashy content that didnât make them any money but helped Apple sell more hardware. They took payouts from Google to offer their journalism for free through search, only to find that it eroded their subscription businesses. They lined up to produce original video shows for Facebook and to reformat their articles to work well in its new app. Then the social-media company canceled the shows and the app. Many news organizations went out of business.
The Wall Street Journal recently laid off staffers who were part of a Google-funded program to get journalists to post to YouTube channels when the funding for the program dried up . And still, just as the news business is entering a death spiral, these publishers are making all the same mistakes, and more, with AI.
Adrienne LaFrance: The coming humanist renaissance
Publishers are deep in negotiations with tech firms such as OpenAI to sell their journalism as training for the companiesâ models. It turns out that accurate, well-written news is one of the most valuable sources for these models, which have been hoovering up humansâ intellectual output without permission. These AI platforms need timely news and facts to get consumers to trust them. And now, facing the threat of lawsuits, they are pursuing business deals to absolve them of the theft. These deals amount to settling without litigation. The publishers willing to roll over this way arenât just failing to defend their own intellectual propertyâthey are also trading their own hard-earned credibility for a little cash from the companies that are simultaneously undervaluing them and building products quite clearly intended to replace them.
Late last year Axel Springer, the European publisher that owns Politico and Business Insider , sealed a deal with OpenAI reportedly worth tens of millions of dollars over several years. OpenAI has been offering other publishers $1 million to $5 million a year to license their content . News Corpâs new five-year deal with OpenAI is reportedly valued at as much as $250 million in cash and OpenAI credits. Conversations are heating up. As its negotiations with OpenAI failed, The New York Times sued the firmâas did Alden Global Capital, which owns the New York Daily News and the Chicago Tribune . They were brave moves, although I worry that they are likely to end in deals too.
That media companies would rush to do these deals after being so burned by their tech deals of the past is extraordinarily distressing. And these AI partnerships are far worse for publishers. Ten years ago, it was at least plausible to believe that tech companies would become serious about distributing news to consumers. They were building actual products such as Google News. Todayâs AI chatbots are so early and make mistakes often. Just this week, Googleâs AI suggested you should glue cheese to pizza crust to keep it from slipping off.
OpenAI and others say they are interested in building new models for distributing and crediting news, and many news executives I respect believe them. But itâs hard to see how any AI product built by a tech company would create meaningful new distribution and revenue for news. These companies are using AI to disrupt internet searchâto help users find a single answer faster than browsing a few links. So why would anyone want to read a bunch of news articles when an AI could give them the answer, maybe with a tiny footnote crediting the publisher that no user will ever click on?
Companies act in their interest. But OpenAI isnât even an ordinary business. Itâs a nonprofit (with a for-profit arm) that wants to promote general artificial intelligence that benefits humanityâthough it canât quite decide what that means. Even if its executives were ardent believers in the importance of news, helping journalism wouldnât be on their long-term priority list.
Ross Andersen: Does Sam Altman know what heâs creating?
Thatâs all before we talk about how to price the news. Ask six publishers how they should be paid by these tech companies, and they will spout off six different ideas. One common idea publishers describe is getting a slice of the tech companiesâ revenue based on the percentage of the total training data their publications represent. Thatâs impossible to track, and thereâs no way tech companies would agree to it. Even if they did agree to it, there would be no way to check their calculationsâthe data sets used for training are vast and inscrutable. And letâs remember that these AI companies are themselves struggling to find a consumer business model. How do you negotiate for a slice of something that doesnât yet exist?
The news industry finds itself in this dangerous spot, yet again, in part because it lacks a long-term focus and strategic patience. Once-family-owned outlets, such as The Washington Post and the Los Angeles Times , have been sold to interested billionaires. Others, like The Wall Street Journal , are beholden to the public markets and face coming generational change among their owners. Television journalism is at the whims of the largest media conglomerates, which are now looking to slice, dice, and sell off their empires at peak market value. Many large media companies are run by executives who want to live to see another quarter, not set up their companies for the next 50 years. At the same time, the industryâs lobbying power is eroding. A recent congressional hearing on the topic of AI and news was overshadowed by OpenAI CEO Sam Altmanâs meeting with House Speaker Mike Johnson . Tech companies clearly have far more clout than media companies.
Things are about to get worse. Legacy and upstart media alike are bleeding money and talent by the week. More outlets are likely to shut down, while others will end up in the hands of powerful individuals using them for their own agendas (see the former GOP presidential candidate Vivek Ramaswamyâs activist play for BuzzFeed ).
The long-term solutions are far from clear. But the answer to this moment is painfully obvious. Publishers should be patient and refrain from licensing away their content for relative pennies. They should protect the value of their work, and their archives. They should have the integrity to say no. Itâs simply too early to get into bed with the companies that trained their models on professional content without permission and have no compelling case for how they will help build the news business.
Instead of keeping their business-development departments busy, newsrooms should focus on what they do best: making great journalism and serving it up to their readers. Technology companies arenât in the business of news. And they shouldnât be. Publishers have to stop looking to them to rescue the news business. We must start saving ourselves.
InfoQ Software Architects' Newsletter
A monthly overview of things you need to know as an architect or aspiring architect.
View an example
We protect your privacy.
Facilitating the Spread of Knowledge and Innovation in Professional Software Development
- English edition
- Chinese edition
- Japanese edition
- French edition
Back to login
Login with:
Don't have an infoq account, helpful links.
- About InfoQ
- InfoQ Editors
Write for InfoQ
- About C4Media
Choose your language
Special Memorial Day Sale with significant discounts of up to 60% off . Register now.
Special Summer Sale up to 60% off. Only 150 tickets available at this price.
Level up your software skills by uncovering the emerging trends you should focus on. Register now.
Your monthly guide to all the topics, technologies and techniques that every professional needs to know about. Subscribe for free.
InfoQ Homepage News OpenAI Publishes GPT Model Specification for Fine-Tuning Behavior
OpenAI Publishes GPT Model Specification for Fine-Tuning Behavior
Jun 04, 2024 2 min read
Anthony Alford
OpenAI recently published their Model Spec , a document that describes rules and objectives for the behavior of their GPT models. The spec is intended for use by data labelers and AI researchers when creating data for fine-tuning the models.
The Model Spec is based on existing internal documentation used by OpenAI in their reinforcement learning from human feedback (RLHF) training used to fine-tune recent generations of their GPT models. The Spec contains three types of principles: objectives, rules, and defaults. Objectives define broad descriptions of desirable model behavior: "benefit humanity." Rules are more concrete, and address "high-stakes" situations that should never be overridden by users: "never do X." Finally, the Spec includes default behaviors that, while they can be overridden, provide basic style guidance for responses and templates for handling conflicts. According to OpenAI :
As a continuation of our work on collective alignment and model safety, we intend to use the Model Spec as guidelines for researchers and AI trainers who work on reinforcement learning from human feedback. We will also explore to what degree our models can learn directly from the Model Spec. We see this work as part of an ongoing public conversation about how models should behave, how desired model behavior is determined, and how best to engage the general public in these discussions.
In 2022, OpenAI introduced a fine-tuned version of GPT-3 called InstructGPT . The model was fine-tuned using RLHF on a dataset of ranked model outputs. The idea was to make the model more "aligned" with user intent and reduce false or toxic output. Since then, many research teams have done similar instruction-tuning on their LLMs. For example, Google's Gemini model is also fine-tuned with RLHF. Meta's Llama 3 is also instruction-tuned, but via a different fine-tuning method, direct preference optimization (DPO).
The key to instruction-tuning, however, is the dataset of prompt inputs with multiple outputs ranked by human labelers. Part of the purpose of the Model Spec is to guide the labelers in ranking outputs. OpenAI also claims to be working on methods for automating the instruction-tuning process directly from the Model Spec. Because of this, much of the content of the Model Spec are examples of user prompts along with "good" and "bad" responses.
Many of the rules and defaults in the Spec are intended to address common abuses of LLMs. For example, the rule to follow the chain of command is designed to help prevent the simple "jailbreak" of prompting the model to ignore previous instructions. Other specifications are intended to shape the responses of the model, especially when refusing to perform a task; according to the Spec , "refusals should be kept to a sentence and never be preachy."
Wharton Professor and AI researcher Ethan Mollick posted about the Model Spec on X:
As people have pointed out in the comments, Anthropic has its Constitution. I find it to be much less weighty as a statement & less clarifying, since it outlines generally good stuff & tells the AI to be good, making it hard to understand the difficult choices between principles.
Anthropic introduced the idea of Constitutional AI in 2022. This process uses an AI model to rank outputs for instruction-tuning. Although Anthropic's code is not open-source, the AI community HuggingFace published a reference implementation of Constitutional AI based on Anthropic's work.
About the Author
Rate this article, this content is in the ai, ml & data engineering topic, related topics:.
- AI, ML & Data Engineering
- Large language models
- Generative AI
- Neural Networks
- Deep Learning
Related Editorial
Related sponsored content, popular across infoq, what's new in c# 13: enhanced params, performance boosts, and new extension types, asp.net core updates in .net 9 preview 4: support for openapi doc generation, hybridcache and more, java news roundup: java turns 29, kotlin 2.0, semantic kernel for java 1.0, more openjdk updates, spring ecosystem releases focus on spring boot, spring session and spring security, retrieval-augmented generation (rag) patterns and best practices, architecture modernization with nick tune, related content, the infoq newsletter.
A round-up of last week’s content on InfoQ sent out every Tuesday. Join a community of over 250,000 senior developers. View an example
With AI writing so much code, should you still study computer science? This new data point provides an answer.
- UC Berkeley sees a 48% jump in first-year applications to study computer science.
- Despite generative AI advances, students are eager to pursue computer science careers.
- Human developers remain essential for creating something new.
One of the most persistent concerns around generative AI is whether the technology will put workers out of a job. This idea has particularly caught on in the context of software coding .
Github Copilot can write a lot of code these days, so is it even worth studying computer science now? That's been a question on the minds of math-minded high schoolers since ChatGPT burst on the scene in 2022.
There's a new data point that helps answer at least part of this question: Students are still lining up in droves to take computer science in college.
An eye-popping data point
Let's take The University of California Berkeley as an example, as this college at or near the top for computer science.
First-year applications to UC Berkeley's College of Computing, Data Science, and Society CDSS increased 48% this year. There were 14,302 (non-transfer) applications for these CDSS majors in the Fall 2024 incoming class, versus 9,649 the previous year.
For context, the number of first-year applications to UC Berkeley as a whole didn't change much from a year earlier.
Related stories
This was announced last week by Professor Jennifer Chayes, the dean of Berkeley's College of CDSS. She popped these eye-popping stats during a fireside chat with Governor Gavin Newsom and Stanford Professor Fei-Fei Li at the at the Joint California Summit on Generative AI in San Francisco.
There's a role for human software developers
Afterwards, I got in touch with John DeNero, Computer Science Teaching Professor at UC Berkeley, to talk about this some more.
He's also chief scientist at Lilt , a generative AI startup, and he was previously a researcher at Google working on Google Translate , one of the first successful AI-powered consumer apps.
"Students express some concern that generative AI will affect the software engineering job market, especially for entry-level positions, but they are still excited about careers in computing," he wrote in an email to Business Insider. "I tell them that I think many of the challenging aspects of software development can't be performed reliably by generative AI at this point, and that I expect there will still be a central role for human software developers long into the future."
AI can't do new things very well
Generative AI is currently very good at replicating parts of software programs that have been written many times before, DeNero explained.
That includes computer science homework assignments! See BI's coverage on how much ChatGPT is used to cheat on homework .
What if you want to create something new? This is where smart human coders will still be needed. (This makes logical sense as AI models are trained on data. If that information doesn't exist yet or it's not part of the training dataset, the models often get in trouble).
Generative AI "requires a lot of thoughtful human intervention to produce something new, and all consequential software development projects involve quite a bit of novelty," DeNero said. "That's the hard and interesting part of computing that currently requires clever and well-trained people."
"Generative AI can speed up the more mundane parts of software development, and software developers tend to adopt efficiency tools quickly," he added.
What happens at Lilt?
This applies to what's happening at Lilt, which is building an AI platform for translators.
Google Translate first came out 18 years ago. And still, human linguists have jobs and are relied upon when translations are really important. For instance, you can use Google Translate to read a Japanese train timetable maybe, but would you use the app to translate your business's most important contract without having a human expert check it? Probably not.
"To reliably produce publication-quality translations, human expert linguists are still at the center of the process, but by using Lilt's task-specific generative AI models, those experts are much faster, more accurate, and more consistent," DeNero said. "As a result, more text gets translated at higher quality into more languages."
He expects this same pattern to play out in software development: A small team of highly trained human developers will have an even greater capacity to build useful high-quality software.
"And so, future Berkeley graduates will have plenty of opportunities to use their computing skills to improve the world," DeNero said. "Hopefully some more of them will come work for Lilt."
Watch: AI expert discusses generative AI: What it means and how it will impact our future
- Main content
Corral Fire
- 90% Contained
- 14,168 Acres
- 1 County: San Joaquin
ALERTCalifornia Camera Feed
Resources assigned.
- 475 Personnel
- 15 Water Tenders
- 40 Other Assigned
Numerous firefighting air tankers from throughout the State are flying fire suppression missions as conditions allow.
Status Update
Situation summary.
Weather conditions became more favorable for firefighters, allowing crews to make progress constructing and improving control lines.Â
Evacuation Information
- sjready.org/
Damages and Destruction
Confirmed Damage to Property, Injuries, and Fatalities.
- 1 Structures Destroyed Residential, Commercial and Other
- 2 Injuries Confirmed Fire Personnel and Civilian Injuries
Social Media
- @calfireSCU
IMAGES
VIDEO
COMMENTS
3. Ask ChatGPT to write the essay. To get the best essay from ChatGPT, create a prompt that contains the topic, type of essay, and the other details you've gathered. In these examples, we'll show you prompts to get ChatGPT to write an essay based on your topic, length requirements, and a few specific requests:
Write With Transformer Get a modern neural network to auto-complete your thoughts. This web app, built by the Hugging Face team, is the official demo of the đ€/transformers repository's text generation capabilities. Star Models. đŠ GPT-2 ... Released by OpenAI, this seminal architecture has shown that large gains on several NLP tasks can be ...
Explore resources, tutorials, API docs, and dynamic examples to get the most out of OpenAI's developer platform.
Longer context. GPT-4 is more creative and collaborative than ever before. It can generate, edit, and iterate with users on creative and technical writing tasks, such as composing songs, writing screenplays, or learning a user's writing style. Input. Explain the plot of Cinderella in a sentence where each word has to begin with the next ...
ChatGPT is the brainchild of AI firm OpenAI, based in San Francisco, California. ... noting that students have long been able to outsource essay writing to human third parties through ...
AI Essay Writer đȘ. By writeanypapers.com. The AI-powered essay writer from ChatGPT generates free essays across various styles, ensuring all writing is original and plagiarism-free. Sign up to chat. Requires ChatGPT Plus.
Write me a 100-word essay in the voice of a high school student explaining why I would love to attend Dartmouth to pursue a double major in biology and computer science. ... OpenAI said that it ...
Revolutionize essay writing with our AI-driven tool: Generate unique, plagiarism-free essays in minutes, catering to all formats and topics effortlessly.
The arrival of OpenAI's ChatGPT, a program that generates sophisticated text in response to any prompt you can imagine, may signal the end of writing assignments altogetherâand maybe even the ...
We believe our research will eventually lead to artificial general intelligence, a system that can solve human-level problems. Building safe and beneficial AGI is our mission.
Explore our models and APIs in the Playground without writing a single line of code. Visit Playground (opens in a new window) Assistants API. Build AI assistants within your own applications that can leverage models, tools, and knowledge to do complex, multi-step tasks. ... Healthify collaborates with OpenAI to improve millions of lives with ...
Whether you need an essay writer or a speed boost for that last-minute assignment, you may be wondering how to use ChatGPT to write an essay. Since its public release in November 2022, OpenAI's AI Chatbot has seen several updates to the quality of natural language processing (NLP) that guides it toward a high-quality, human writing style ...
2. Enter your name and (if you want) organization, then verify your phone number. 3. When you're asked How will you primarily use OpenAI, choose the option that says I'm exploring personal use ...
#OpenAI #writing #outline When you start a writing assignment, it's easy to get overwhelmed about what you should and shouldn't include. Or you could use Ope...
48% of students admitted to using ChatGPT for an at-home test or quiz, 53% had it write an essay, and 22% had it write an outline for a paper. 72% of college students believe that ChatGPT should ...
Because Mike Sharples, a professor in the U.K., used GPT-3, a large language model from OpenAI that automatically generates text from a prompt, to write it. (The whole essay, which Sharples ...
Read the full essays. Beatrice Nolan. Mar 3, 2023, 8:23 AM PST. ChatGPT's essays were based on some old questions from the Common App. Chuck Savage. I got OpenAI's ChatGPT to write some college ...
A new chatbot created by artificial intelligence non-profit OpenAI Inc. has taken the internet by storm, as users speculated on its ability to replace everything from playwrights to college essays.
Peter Laffin is a writing instructor and founder of the private tutoring program Crush the College Essay. He says that tools like OpenAI's are emblematic of other compensation techniques that ...
OpenAI's GPT-4, which was released in March 2023, enables chatbots and other software apps to answer questions, write emails, generate term papers and analyze data.
OpenAI's Sky chatbot sounded eerily similar to Scarlett Johansson despite her not giving consent. Nathan CongletonâNBC/Getty Images Good morning. Clay Chandler here, writing from Hong Kong. Last ...
In our evaluations on a "challenge set" of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as "likely AI-written," while incorrectly labeling human-written text as AI-written 9% of the time (false positives). Our classifier's reliability typically improves as the length of the input text ...
OpenAI has been offering other publishers $1 million to $5 million a year to license their content. News Corp's new five-year deal with OpenAI is reportedly valued at as much as $250 million in ...
OpenAI recently published their Model Spec, a document that describes rules and objectives for the behavior of their GPT models. The spec is intended for use by data labelers and AI researchers when c
Hi, I'm currently using Chat GPT to help me brainstorm and summarize notes for a book I'm writing and I recently noticed that huge swaths of my conversations spanning weeks or months are missing. I realized this when I was searching for specific prompts and quotes I saved from previous chats and nothing was returned. Digging deeper, I saw that entire conversations I had about character ...
The Atlantic's product team will give OpenAI feedback from its experiments and will share use cases for how AI can improve news experiences in ChatGPT and other OpenAI products. Vox Media will leverage OpenAI's technology to build internal and audience-facing capabilities and products, the company said in a statement.
Helen Toner and Tasha McCauley were on OpenAI's board from 2021 to 2023 and from 2018 to 2023, respectively. Read a response to this article by Bret Taylor, the chair of Open AI' s board, and ...
UC Berkeley sees a 48% jump in first-year applications to study computer science. Despite generative AI advances, students are eager to pursue computer science careers. Human developers remain ...
Introducing ChatGPT. We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an ...
Situation Summary. Weather conditions became more favorable for firefighters, allowing crews to make progress constructing and improving control lines.