Opinion | ChatGPT is really changing how I do my work


Once you start using ChatGPT, you pretty much can’t stop. It begins with trivial, fanciful queries: Solve this math problem. Tell me some vegetarian recipes with broccoli and peas. Which came first, the jock or the jockstrap?

But as the AI chatbot easily handles whatever you throw at it, you start to take it more seriously. Over weeks and months you tinker with it, learn its abilities and flaws, and imagine its possibilities for good and ill, the likelihood of its ubiquity and indispensability. Soon, ChatGPT starts to dig a groove into your life. Now you think of it differently – less as a dancing pony than as a workhorse. You find yourself reaching for it for tasks big and small, and although it fails frequently, it seems useful enough that you can imagine a lot of people will soon rely on it.

I’ve experienced this creeping sense of possibility with new technology quite a few times in my life. The last time was with the iPhone; earlier instances were Google search and the internet itself. All of these were groundbreaking at the start, but none of them changed anything overnight. What was more compelling was how easy it was to imagine them becoming more and more useful to an increasing number of people. Five years after Apple unveiled the iPhone, there seemed to be an app for everything, and nearly half of American adults owned a smartphone; five years after that, more than three-quarters did, and it was hard to think of anything smartphones hadn’t changed.

ChatGPT feels similarly momentous. It’s been less than five months since the artificial intelligence company OpenAI released its chatbot. ChatGPT is far from perfect; OpenAI continues to refer to it as a “research preview.” Yet, as my colleagues at The Upshot documented recently, doctors, software engineers, fiction writers, stay-at-home parents, and many others are already beginning to rely on AI to perform important tasks.

These accounts echo my own experience. As I’ve learned what it can and can’t do, ChatGPT has earned a regular place in my workflow – and in my concerns. I keep thinking about new assignments for it, the different ways it might change my job and the larger media industry, and the new ethical, legal, and philosophical questions it raises for journalism and how people get their news.

Other tech-friendly journalists I know have gone through something similar: Suddenly, we have something like a jetpack strapped to our work. Sure, the jetpack is kind of buggy. Yes, sometimes it crashes and burns. And the rules for its use are unclear, so you have to be very careful with it. But sometimes it soars, reducing tasks that would have taken hours to minutes, and minutes to seconds.

It will likely take years of trial and error – and perhaps some huge mistakes – to figure out how it should fit into the profession. Steve Duenes, a deputy managing editor at The Times, told me that a newsroom working group is currently developing guidelines and exploring opportunities for chatbot use by journalists at the paper.

Even as we’re figuring it all out, this much already seems obvious to me: sooner rather than later, something like ChatGPT will become a regular part of many journalists’ toolkits.

Here are some of the ways I’ve used it:

Finding words. One common concern about ChatGPT is that people will pass off its writing as their own, but I don’t think that’s an imminent worry. ChatGPT is a hacky writer – its prose is dull and riddled with clichés (“the human condition,” “humble beginnings,” “triumph over adversity” – barf).

But where it really helps is in hunting for that perfect word or phrase you’re having a hard time summoning. In my jetpack analogy above, I originally wrote that when the jetpack works, it “flies.” I knew “flies” wasn’t right; before ChatGPT, I would have used a thesaurus or banged my head against a wall until the right word came to me. This time I just pasted the entire paragraph into ChatGPT and asked it for alternative verbs; “soars,” its top suggestion, was just the word I had in mind.

This may seem like a small win, but these things add up. I’ve spent many painful minutes of my life searching for the right word. ChatGPT makes that problem largely a thing of the past.

Getting unstuck. Insider’s global editor in chief, Nicholas Carlson, sent a memo to his staff last week encouraging them to start experimenting with ChatGPT, carefully. Carlson had been using the chatbot extensively, and told me he had begun to think of it as a “two-player word processor” that could help people get past routine writing hurdles.

Take the problem of transitions – you’ve written two sections of a piece and are having a hard time writing a paragraph that takes the reader from one part to the next. You can now paste both sections into ChatGPT and ask for its ideas. The transition ChatGPT proposes probably won’t be great, but even bad ideas can help break a block. “As a writer, I like to take an idea from an editor and rewrite it so it’s mine,” Carlson told me. ChatGPT acts as that editor – your always-available sounding board.

Summarizing. When huge, complex news breaks – a court ruling, an earnings report, a politician’s financial disclosure forms – editors and reporters often have to quickly pinpoint the gist of the news in order to know how to cover it. ChatGPT excels at this kind of distillation: give it a long document and it will pull out the big themes instantly and, apparently, reliably.

Carlson used it this way when Donald Trump was indicted: He gave ChatGPT the charging documents and asked it for a 300-word summary. “I’d want a reporter to read the entire indictment” to get it exactly right, he told me, but in the moment of breaking news, Carlson just wanted the big picture. “I did it,” he said, “and it was helpful.”

But wait a second. How could Carlson be sure that ChatGPT’s summary was accurate enough to rely on in deciding how to cover a story? More generally, how can any journalist be sure that anything ChatGPT says is reliable?

The short answer is: you can’t. ChatGPT and other chatbots have been known to make things up or confidently state incorrect information. They are also black boxes. Even ChatGPT’s creators don’t quite know why it suggests certain ideas over others, how its biases work, or in what other ways it might fail.

Problems like these call for great caution in its use – and as far as I can tell, publications are treading very carefully. (Carlson, too, has created a working group to come up with guidelines for using ChatGPT at Insider.)

There are many other ways I can imagine ChatGPT being used in the news business: An editor might prompt it to generate headline ideas. An audio producer might ask it for interview questions for a podcast guest. A reporter approaching a new subject might ask it to suggest five experts to talk to in order to get up to speed.

But some of these uses could be quite problematic. If ChatGPT is involved in choosing the sources we talk to or the questions we ask, then journalists’ work will be shaped, on some level, by this inscrutable medium whose biases and motives we can’t see. (Don’t worry, for now: I found Duenes and Carlson on my own, without ChatGPT’s help.)

Carlson brought up one idea I liked: think of ChatGPT as a half-trusted source. He suggested trusting it the way you’d trust “a gossipy know-it-all who’s three drinks in.” You check everything the source says – much of it may be nonsense, but sometimes the gossip turns out to know what he’s talking about.

This is how ChatGPT is changing my industry. I imagine similarly thorny questions are troubling many other professions. And there won’t be many easy answers.

Farhad wants to chat with readers on the phone. If you’re interested in talking to a New York Times columnist about anything that’s on your mind, please fill out this form. Farhad will select a few readers to call.