Your pictures are now being scanned
You may remember Apple’s awkward moment a few weeks ago, when users discovered their photos were being scanned by Apple Intelligence to match landmarks. Users were not told, and that caused an outcry among security experts. Google is now going through something similar. And again, the problem is not the technology, it’s the secrecy.
Apple’s Enhanced Visual Search sends portions of a photo to the cloud to match against a global index of points of interest. The process is designed to be privacy-preserving, but as cryptography expert Matthew Green complained, “it’s very frustrating when you learn about a service two days before New Year’s and find that it’s already been enabled on your phone.”
Google’s awkward moment relates to SafetyCore, an Android system update that enables on-device image scanning. It can do all kinds of things, but it is currently focused on blurring or flagging sensitive content. On paper it is even more private than Apple’s Enhanced Visual Search, given that everything stays on the device. Or so we’re told.
But when a technology is installed and enabled on our phones without warning, after-the-fact assurances that everything is fine tend to be met with more skepticism than they would have been had the rollout been more open. This is the same issue Apple ran into.
I have covered SafetyCore before, stressing that the protection it brings to Google Messages would be a welcome addition to Gmail, shifting security scanning from Google’s servers to a user’s phone. But that doesn’t change the point about the lack of openness.
GrapheneOS, an Android-focused security developer, offers some reassurance that SafetyCore “doesn’t provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.”
But GrapheneOS also points out that “it’s unfortunate that it’s not open source and released as part of the Android Open Source Project and the models are also not open, let alone open source… We would have no problem with having local neural network features for users, but they would have to be open source.”
Google says SafetyCore “provides on-device infrastructure for securely and privately performing classification to help users detect unwanted content. Users control SafetyCore, and SafetyCore only classifies specific content when an app requests it through an optionally enabled feature.”
And once users know it’s there, all of that is true.
The point, as ZDNET notes, is that “Google never told users this service was being installed on their phones. If you have a new Android device or one with software updated since October, you most likely have SafetyCore on your phone.” Just as with Apple, “one of the most controversial aspects of SafetyCore is that it installs quietly on Android 9 and later devices without the user’s explicit consent. This move has raised concerns among users regarding privacy and control over their devices.”
If you don’t trust Google, because, as ZDNET points out, “just because SafetyCore doesn’t phone home doesn’t mean it can’t call on another Google service to tell Google’s servers that you’ve been sending or taking ‘sensitive’ photos,” then you can stop it. You will find the option to uninstall or disable the service by tapping on SafetyCore under system apps in your phone’s main Settings menu.
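For anyone who prefers to check from a computer rather than dig through the Settings menu, here is a minimal sketch of the same check using adb from Python. It assumes adb is installed with USB debugging enabled on the phone, and that the service ships under the commonly reported package name com.google.android.safetycore; both are assumptions on my part, not something confirmed in Google’s statement above.

```python
import subprocess

# Commonly reported package name for Google's SafetyCore service
# (an assumption -- verify on your own device).
SAFETYCORE_PKG = "com.google.android.safetycore"

def is_installed(package: str) -> bool:
    """Ask the connected device, via adb, whether the package is present."""
    result = subprocess.run(
        ["adb", "shell", "pm", "list", "packages", package],
        capture_output=True, text=True, check=True,
    )
    return any(line.strip() == f"package:{package}" for line in result.stdout.splitlines())

def remove_for_current_user(package: str) -> None:
    """Uninstall the package for the current user only (user 0);
    it may return with a later system update or reinstall."""
    subprocess.run(["adb", "shell", "pm", "uninstall", "--user", "0", package], check=True)

if __name__ == "__main__":
    if is_installed(SAFETYCORE_PKG):
        print(f"{SAFETYCORE_PKG} is installed on this device.")
        # Uncomment to remove it for the current user:
        # remove_for_current_user(SAFETYCORE_PKG)
    else:
        print(f"{SAFETYCORE_PKG} was not found on this device.")
```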
Lessons learned, then, for both Apple and Google in recent weeks. If you want to turn our phones into AI-driven machines, tell us what you are doing BEFORE you do it, and give us the chance to say yes or no. Otherwise it just feeds the fear of the unknown.