AI is here to stay.



Jacky Liang is living in the future.

An artificial-intelligence engineer in Philadelphia, he uses generative AI at work and in his personal life “as much as possible—to the point that even my girlfriend is like ‘Babe, please.’”

The tools he’s using—to look things up during his downtime, brainstorm for work, punch up his résumé, or write blog posts—go well beyond the kind of first-generation AI that is already embedded in our daily lives, sorting our social media feeds, catching credit card fraud and recognizing faces in our photos. The tools Liang relies on are all next-generation generative AIs, things like OpenAI’s ChatGPT, Google’s Bard, Anthropic’s Claude, and Inflection’s Pi.

Soon, most of us will use tools like these, even if indirectly, unless we want to risk falling behind. We will face a growing number of communications generated with AI assistance, plans made with their input, and even products they helped inspire. Productivity-enhancing technology tends to improve our output or make it more plentiful, forcing people to change how they work but not reducing the hours they spend at it. This means the gap between those using AI for productivity and everyone else threatens to widen into a chasm as we contend with more and more stuff produced by the combination of human minds and new kinds of machine assistance.

A recent global survey of 10,000 people by tech and consulting firm Capgemini found that people who have used generative AI tools for basic tasks like searching for and summarizing information were on the whole highly satisfied with them. For now, the generative AI tools that can boost people’s productivity require an early adopter’s mindset, since the purveyors of these tools are still unknown to many, and using them to best effect remains an uncommon skill.

But recently, the giants of the U.S. tech industry made it clear they have plans to bring the capabilities of generative AI deep into tools most of us use every day, where they will be nearly impossible to avoid.

Suddenly ubiquitous

In just the past two weeks, Microsoft announced deep integration of generative AI tools across Windows 11; Google rolled out changes to its Bard generative AI that allow it to use all your documents, emails and calendar items as fodder; Amazon showed off the next generation of generative AI capabilities for its Alexa smart assistant, which should make it chattier and more flexible; and Meta announced it would make a chat-based assistant, as well as a host of other chatbots based on celebrities, available in Instagram, WhatsApp and Facebook.

Even Apple—which has yet to announce its own text-based generative AI but is developing one—last week rolled out a new accessibility feature for iPhones that uses a different form of generative AI to clone a user’s voice.

The sudden accessibility and ubiquity of generative AI tools do not guarantee that they’ll be used. And these are very much first-generation technologies, full of frustrating limitations. But if the utility that early adopters already get out of them is any indication, adoption by the masses will soon follow.


As more people use AI to help them generate written and visual communications more quickly, the volume of that content is likely to increase. This could mean AI will also be needed to cope with the uptick in information, both as better filters for what comes in and as a way to help generate responses to it.

Those who don’t opt to use AI to help them summarize others’ reports (likely generated with the help of AI), respond to emails (ditto) or adapt to new business processes (also created with the help of AI) risk drowning in a fire hose of communications and increased complexity.

Another way generative AI could make itself impossible to avoid: by becoming the default interface for information retrieved from the internet, and within companies. Already, one of the things language-based generative AI systems are pretty good at is search and summarization.

One potential stumbling block to the use of AI in this way: It often makes stuff up, a tendency that is inherent to the way it works, and may be unavoidable. This reduces its value somewhat, as it means that we can’t just hand tasks over to AI, and all of its work must be checked. But AI is still pretty good at taking care of a lot of rote tasks—like writing often-used, boilerplate code or text—and can save its users time by turning them into editors, rather than content creators.

Humiliated by a robot

This talent for making information more accessible—and transforming it into other kinds of information more easily—is apparent in Google’s new Bard rollout, called Extensions.

Enabling Bard to search and summarize across everything in your Google account yields, in my own experience, some astonishing results. For example, I asked it to summarize recent documents I’d created that contained ideas for a specific creative project. It not only delivered a succinct summary of the contents of these disparate documents, but it also editorialized—correctly—that the ideas contained in them were at an early stage. (Note to future historians: The kind of low-key humiliation represented by a robot dispassionately observing that a human’s ideas are half-baked began approximately…now.)
