Google’s Gemini team is apparently sending out emails about an upcoming change to how Gemini interacts with apps on Android devices. The email informs users that, come July 7, 2025, Gemini will be able to “help you use Phone, Messages, WhatsApp, and Utilities on your phone, whether your Gemini Apps Activity is on or off.” Naturally, this has raised some privacy concerns among those who’ve received the email and those using the AI assistant on their Android devices.
It’s mostly Lemmy. In real life, people range from amused to indifferent. I have never met anyone as hostile as the Lemmy consensus seems to be. If a feature is useful, people will use it, AI or not. Some AI features are gimmicks, and those largely get ignored unless they’re very intrusive (in which case the intrusiveness, not the AI, is the problem).
I imagine even the fk_ai crowd appreciates the non-gimmick stuff, as long as it’s nothing like a chatbot.
Tiny example from Gmail:
This is all over, and it can be super useful from time to time.
They say “f AI!”, but surely they still want better searches than were possible five years ago? Provided it’s not sycophantic and confabulatory, etc. etc.
Good point on intrusiveness.
PS: I translated news from Iran this week using AI tools and using traditional translators. Nobody would advocate for the garbage traditional translation; as soon as I went the “AI” route, it was suddenly possible to understand what the journalists were trying to say. That doesn’t mean I want translators to lose their jobs; it just means I know what the best available technology is and how to use it to get a job done. (And just because it translates well doesn’t mean I’ll also trust it to summarize the article for me.)
People will also use it if it’s not useful, if it’s the default.
A friend of mine did a search the other day to find the time of an event, and Google’s AI lied to her. Top of the page, just completely wrong.
Luckily I said, “That doesn’t sound right” and checked the official site, where we found the truth.
Google is definitely forcing this out, even when it’s inferior to other products. Hell, it’s inferior to their own, existing product.
But people will keep using AI, because it’s there, and it’s right most of the time.
Google sucks. They should be broken up, and their leadership barred from working in tech. We could have had a better future. Instead we have this hallucinatory hellhole.
They need a tech ethics board, and people need a license to operate or work in decision-making capacities. Anyone above the offender’s head who signs off on an unethical decision loses their license, too. The license should be cheap, to prevent monopoly, but you have to have one to handle data. Don’t have a license? Don’t have a company. Plant shitty surveillance without separate, noticeable, succinctly presented agreements that are clear and understandable, with warnings about currently misunderstood uses, and you lose your license. First offense.
Edit: Mandatory audits should also apply, with preformulated, separate, and succinct notifications: “This company sells your info to the government and police forces. Any private information, even sexual in nature, can be used against you. Your information will be used by several companies to build your complete psychological profile, to sell you things you wouldn’t normally purchase and to predict crimes you might commit.”
How are you evaluating “inferior”? I like the AI search. That’s my opinion; you have yours.
It’s one of the reasons I use Lemmy a little less these days: it’s evident to me that it’s an echo chamber for a tiny subset of humanity, and at times it just feels like a circle jerk where real change isn’t an option.