The Toilet Theory of the Internet

Allow me to explain my toilet theory of the internet. The premise, while unprovable, is quite simple: At any given moment, a great deal of the teeming, frenetic activity we experience online—clicks, views, posts, comments, likes, and shares—is coming from people who are scrolling on their phones in the bathroom.

Toilet theory isn’t necessarily literal, of course. Mindless scrolling isn’t limited to the bathroom, and plenty of idle or bored swiping happens during other down moments—while waiting in line, or sitting in gridlocked traffic. Right now, somebody somewhere is probably reading an article or liking an Instagram post with a phone in one hand and an irritable infant in the other.

The toilet theory is mostly a reminder to myself that the internet is a huge place that is visited countless times each day by billions of people in between and during all the mundane things they have to do. As a writer, I use this framework to check my ego and remember that I have precious little time to hook a reader with whatever I’m trying to get them to read—but also that my imagined audience of undistracted, fully engaged readers is an idealized one. I’m distracted just like everyone else: Sometimes I read deeply, but the majority of my nonwork surfing involves inattentively scrolling through clicky articles to find the morsel that catches my eye, or pecking out some typo-riddled phrase about a home-improvement product into Google while walking from the parking lot into Lowe’s and nearly getting hit by a vehicle.

[Read: AI search is turning into the problem everyone worried about]

I’ve been thinking about my toilet theory this week, after Google announced its new generative-AI suite of tools, including an updated version of its search engine that will “do the Googling for you.” The company has been experimenting with generative AI at the top of its search results for a while, with mixed results: Occasionally the service “hallucinates” and confidently answers questions with made-up or incorrect information. Now the company is adding “AI Overviews,” a feature that compiles and sorts information in response to a query. (If you’re looking for a restaurant, it might sort options by different categories, such as the ambience offered.) Ultimately, generative search simply summarizes information from sources all around the web and presents it to people in an easily digestible format.

Organizations that rely on Google to send people to their websites—publishers, for example—are concerned by this shift. Analytics companies have dubbed such queries “zero-click searches”: If the answer is right there in the search results, why would most people want to follow a link to the website from which the summary is derived? And publishers have reason to be wary. Over the past 15 years, the internet has been remade in Google’s image, giving rise to an entire cottage industry of search-engine optimization dedicated to studying subtle shifts in the company’s algorithms and then, in some cases, gaming them in order to rank higher in Google’s results. A recent consensus has begun to form: People, including search experts, feel that the quality of Google’s once-beloved results has degraded, in part thanks to the glut of low-quality SEO bait.

Google doesn’t seem worried. Liz Reid, the company’s head of Search, wrote on the company’s blog that “the links included in AI Overviews get more clicks than if the page had appeared as a traditional web listing for that query.” And in an interview with the Associated Press, Reid argued, “In reality, people do want to click to the web, even when they have an AI overview. They start with the AI overview and then they want to dig in deeper.” She also noted that Google will try to use the tool to “send the most useful traffic to the web.” The implication is that Google would rather not destroy the web. After all, if people are no longer encouraged to publish information, where will the AI get its answers from?

But the quote from Reid I find most illuminating is one she delivered earlier in the week. “People’s time is valuable, right? They deal with hard things,” she said to Wired. “If you have an opportunity with technology to help people get answers to their questions, to take more of the work out of it, why wouldn’t we want to go after that?” Although I doubt she would put it this way, Reid was offering her own definition of toilet theory. People use Google to find information in a pinch—the average Googler looks less like an opposition researcher or a librarian and more like a concerned parent typing barely comprehensible phrases into their phone’s browser, along the lines of milk bird flu safe? Some people might spend a lot of time going as deep as possible, picking through search results to compare information. But one recent analysis shows that most people visit just one page when they Google; that same analysis found that about half of all search sessions are finished in less than a minute. For this reason, it’s in the company’s best interest to make using the site as quick and frictionless as possible.

Yet this is a sensitive subject for the search giant. People are wary of generative AI, sure, but the perception that Google might work better for some people by simply giving them an answer rather than expecting them to click out to another website has also been an issue in antitrust complaints against the company. No surprise, then, that Google took pains to explain how new technology could be used to encourage more web browsing. The company also unveiled LearnLM, an AI feature that it says could function like a tutor, breaking down information that people encounter while using Google services such as Search and YouTube. In an interview, a Google executive told my colleague Matteo Wong that LearnLM is “an interactive experience with information,” one that serves users who want more than a summary and who will be more likely to click through to many links. Whether LearnLM and similar products work as described is an open question, as is whether people will want to partner with a large language model to do research (or whether they’ll enable the function at all).

I can’t claim to know Google’s true ambitions, but recent history has shown that technology companies often present a rosy, unrealistic view of how people will actually use their products. I’m reminded of Facebook’s move, beginning in 2017, to shift its focus away from the News Feed and toward groups and private “meaningful communities.” To celebrate, Mark Zuckerberg gave a speech highlighting many of the uplifting communities on the platform—support groups for women and disabled veterans, groups for fans of the video game Civilization. He said that the company would use AI technology to recommend groups to people based on their interests. “When you bring people together, you never know where it will lead,” he told the crowd.

The quote proved telling. One lasting legacy of Facebook’s communities pivot is that it effectively helped connect large groups of vaccine skeptics, election deniers, and disinformation peddlers, who were then able to coordinate and pollute the internet with lies and propaganda. In 2020, Facebook began removing or restricting thousands of QAnon-related groups and pages, some with thousands of users, after the conspiracy movement grew unchecked on the site. Just before the 2020 election, the company was implicated when an FBI complaint revealed that a plot to kidnap Michigan Governor Gretchen Whitmer had been organized partly in a Facebook group.

[Read: Facebook has a superuser-supremacy problem]

Similarly, generative-AI sales pitches have tended to emphasize the products as assistants and productivity tools. ChatGPT and other chatbots are romanticized as creative partners and sounding boards—ways to stress-test ideas or eliminate busywork. That’s certainly true in some cases, but a rash of examples from schools and universities shows that many students see the products as a shortcut, a way to cheat and get out of the drudgery of writing term papers. Likewise, content farms aren’t using the tools as creative partners—they’re using generative AI to replace writers, churning out questionable drivel optimized for search engines (which Google might end up summarizing using its own generative AI). In this instance, what is marketed as an intelligent productivity tool is, in actuality, a race to the bottom—one that could cause the internet to eat itself.

And that brings us back to my toilet theory. I don’t mean to scold or moralize—this is just an effort to see the internet for what it is: a collection of people using the services they have at their disposal to get through their busy, messy lives.

Watching Google roll out these tools, knowing full well how people will actually use them, I struggle to find a logic beyond a cynical short-term profit motive (or a desire not to be seen as losing the AI race). Google’s zero-click effect may soon create a CliffsNotes version of the web, and any effort to stop this from happening would probably involve turning away from generative AI altogether.

It is also possible (and somewhat terrifying) that Google doesn’t see a future for the web at all—at least, not the web as we know it. In an interview last year with The Verge, CEO Sundar Pichai extolled the virtues of the webpage-based internet but also offered a line that struck me when I revisited it this week: “Mobile has come, video is here to stay, and so, there’s going to be many different types of content. The web is not at the center of everything like it once was. And I think that’s been true for a while.” Left unsaid here is that the web may not be at the center of everything anymore because of Google, its slow search degradation, and its power over how and what websites publish.

Google depends on the web—the endless array of sites that it indexes in order to “organize the world’s information.” But what happens to the web when Google feels it has succeeded in accomplishing the task outlined in its mission statement? We may be about to find out.
