Google Maps’ New Vibe Feature Provides More Info But Could Be Biased - Updated

The most popular spots will still be the most recommended

  • Google says it plans to roll out a new feature to its Maps app that gives users the “vibe” of a neighborhood. 
  • Some experts say that the feature could lead to bias. 
  • One observer says the highlighted places of interest are more likely to be in gentrifying neighborhoods.


Someone using maps on an iPhone.

Marianna Massey / Getty Images

A new Google Maps feature is intended to help you get a "vibe" about where you're going, but the technology could be prone to bias. 

Neighborhood Vibe works by showing user reviews as you pan around an area. Another new feature lets users see how busy a neighborhood might be, based on Google's crowd-level data from businesses in the area, and what the weather may be like on the day they plan to arrive. While the new Maps update hasn't rolled out yet, some experts see the potential for trouble. 

"It's standard practice for computer scientists to continuously improve AI models based on new data," Daniel Wu, a researcher in the Stanford AI Lab and cofounder of the Stanford Trustworthy AI Institute, which focuses on technical research to make AI safe, told Lifewire in an email interview. "What that means is, as Google rolls this feature out, they'll likely be training the model to show reviews that more people click on or find useful. But this can lead to a biased sample of reviews." 

Whose Vibe?

To determine the vibe of a neighborhood, Google says it combines AI with local knowledge from Google Maps users who add more than 20 million contributions to the map each day—including reviews, photos, and videos. 

"Say you're on a trip to Paris—you can quickly know if a neighborhood is artsy or has an exciting food scene so you can make an informed decision on how to spend your time," the company wrote on its blog. 

Someone searching for a place using the maps app on their smartphone.

Oscar Wong / Getty Images

Herve Andrieu, a Google Maps Local Guide who doesn't work for the company but runs a private website on the subject, said in an email interview that, at a minimum, Maps users provide data by telling Google Maps where they want to go and sharing their location while using the app. Contributing users also supply extra information.

Andrieu said that bias might arise with established points of interest. "The algorithm will necessarily keep recommending the most popular spot, which in turn will always attract more users, which in turn proves the AI to be correct," he added. "I am wondering how 'local gems,' i.e., lesser known, less frequented spots, will get a chance to appear."
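That feedback loop can be simulated in a few lines. The following sketch is purely illustrative, with made-up visit counts and a deliberately naive popularity-only ranking rule; it is not how Google Maps ranks places.

```python
import random

# Illustrative-only simulation of the feedback loop Andrieu describes: a
# ranker that always surfaces the most-visited place keeps widening the gap
# between an established spot and a "local gem". The visit counts and the
# popularity-only selection rule are made up, not Google's.

visits = {"established cafe": 1000, "local gem": 10}

def recommend(counts):
    # The biased policy: rank purely by past popularity.
    return max(counts, key=counts.get)

for day in range(30):
    pick = recommend(visits)
    visits[pick] += 50                           # most users follow the recommendation
    visits["local gem"] += random.randint(0, 5)  # a few explore on their own

print(visits)  # the established spot's lead keeps growing
```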

The Vibe feature "can lead to biased results when places of interest highlighted are more likely to be in gentrifying neighborhoods or predominantly in affluent areas, while restaurants and establishments operating in primarily minority neighborhoods (or minority-owned businesses) are less likely to be so highlighted," Anjana Susarla, the Omura-Saxena Professor in Responsible AI at the Broad College of Business at Michigan State University, told Lifewire via email.

“Neighborhood vibe highlights popular spots in an area based on contributions from the Google Maps community - a diverse set of people with different backgrounds and experiences,” Google spokesperson Genevieve Park told Lifewire via email. “When it launches, it’ll be available for all neighborhoods around the world, making it easy to see a range of popular places at a glance - from local gems to newer establishments. As always, we take multiple steps to ensure that Google Maps accurately reflects the real world.”

Preventing AI Bias

Modern AI employs a general technique known as deep learning, in which features can be automatically inferred and extracted from the underlying data without the need for a researcher to select them by hand, Flavio Villanustre, the global chief information security officer for LexisNexis Risk Solutions, told Lifewire in an email interview.

In a system such as Google Maps, that process has likely identified features that make a neighborhood seem reputable, desirable, or trustworthy, and established correlations between those qualities and specific characteristics in the data.

"For example, higher levels of poverty could correlate with the proximity to clusters of fast-food chain restaurants; higher income populations may reside closer to luxury stores," Villanustre said. "But while doing so, if the data is not normalized by protected classes of individuals (e.g., skin color, religion, ethnicity, gender, etc.), it's quite possible the resulting model will leverage proxies to these classes, as it infers 'desirability.' Some of these proxies can affect the results of the model and those protected classes in a negative manner."

Nabeel Ahmad, a professor of Human Capital Management at Columbia University, told Lifewire in an email interview that bias in AI cannot be entirely prevented. Instead, developers can take steps to reduce it. 

"First, use multiple data sources to reduce over-reliance on any single data source," Ahmad said. "Second, have a governance system of people who define what the AI model should be doing (i.e., parameters to take into consideration, etc.), what its expected output should be, and routinely run tests to check how accurate the AI results are to expectations. Last, make adjustments over time as needed to fine-tune the AI so that it provides more accurate and useful results." 

Update: Bias in AI is a constant concern because large datasets are not necessarily representative of what they’re supposed to reflect, Irina Raicu, the director of internet ethics at the Markkula Center for Applied Ethics at Santa Clara University, told Lifewire in an email interview. 

Given the complexity of the non-digital world, “large amounts of data” can still mean “incomplete and inaccurate,” Raicu said. “Bias can be expressed even at the level of what we choose to measure--what we choose to turn into data--not just by not including certain variables (or people) in a data set in representative numbers, but also by not developing certain datasets at all.”

Correction 10/7/22: Updated paragraph two for clarity and paragraph nine to include a response from Google.
