The article:
Financial service providers increasingly utilize AI features, such as automated social media screening, to determine risk scores for their customers.
I wonder how long until the absence of a discoverable social media trail will be considered a “red flag” used to deny essential services required for ordinary participation in society.
hahah! so then the user caves, goes to social media to open an account, and bam:
Account creation aborted. Reason: Absence of discoverable social media trail
That outcome is already partially here. Some financial institutions use ‘thin file’ risk scoring — customers with minimal credit/transaction history get flagged as higher risk. The jump from ‘thin financial file’ to ‘thin digital footprint’ is shorter than it looks.
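In toy form (my own sketch, not any bank’s or vendor’s actual scoring logic; the thresholds and field names are made up), the jump is literally one more `if`:

```python
# Illustrative only: a "thin file" heuristic that flags customers with
# little financial history, plus the hypothetical extension of the same
# rule to a thin *digital* footprint. Field names and cutoffs are invented.

def risk_flags(profile: dict) -> list[str]:
    flags = []
    # Classic thin-file test: too few credit lines and too little history.
    if profile.get("credit_lines", 0) < 2 and profile.get("months_of_history", 0) < 12:
        flags.append("thin_financial_file")
    # The hypothetical next step: no discoverable social media accounts.
    if not profile.get("social_accounts"):
        flags.append("thin_digital_footprint")
    return flags

print(risk_flags({"credit_lines": 0, "months_of_history": 3, "social_accounts": []}))
# → ['thin_financial_file', 'thin_digital_footprint']
```

The point of the sketch: nothing structural changes between the two checks, only which data source the provider has decided to treat as a proxy for risk.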
The more immediate concern is what Maeve quoted: the 269-check sweep includes ‘politically exposed persons’ matching and social media screening. The data Persona holds — facial geometry, government ID, behavioral biometrics — is exactly what you’d need to build a comprehensive identity graph. And unlike a bank, Persona has no equivalent regulatory baseline. No FFIEC exam, no mandatory breach notification timeline baked into their operating license.
The KYC mandate created the demand for this data. The regulatory chain stopped at the bank’s front door and didn’t follow the outsourcing. Persona is the gap.
When this hits every user and not just the underage users (and it absolutely will), I’m fucking done with any platform using it. I’ve already got a Matrix server up that I’m trying to get friends onto.
Same, even though I help run relatively large servers. I’m there for the communities but it would be a deal-breaker.
Once a user verifies their identity with Persona, the software performs 269 distinct verification checks and scours the internet and government sources for potential matches, such as by matching your face against politically exposed persons (PEPs), generating risk and similarity scores for each individual. IP addresses, browser fingerprints, device fingerprints, government ID numbers, phone numbers, names, faces, and even selfie backgrounds are analyzed and retained for up to three years.

The checks run on the images themselves include “Selfie Suspicious Entity Detection,” a “Selfie Age Inconsistency Comparison,” similar-background detection, which appears to be matched against other users in the database, and a “Selfie Pose Repeated Detection,” which seems to determine whether you are using the same pose as in previous pictures. In short, the software “flags you as a ‘suspicious entity’ based on your face alone,” the researchers write. That may prove dangerous, as Persona’s software has reportedly made significant mistakes when estimating the age of users in the past. When paired with AML reporting, such a suspicion can quickly lead to the unjust termination of bank accounts.

And that seems to be exactly what Persona was built to do. In addition to facial recognition, Persona’s software is able to perform checks on financial data — including checks against sanctions lists, checks on cryptocurrency activity via the blockchain analysis firms Chainalysis and TRM Labs, and an interface to file suspicious activity reports (SARs) directly with US and Canadian federal agencies.
🤯
Well if I wasn’t done using discord already, I’d for sure be done using it now…
The checks Persona’s software performs on its users are bundled in an interface that pairs facial recognition with financial reporting, alongside a parallel implementation that appears designed to serve federal agencies.
I think the election on Discord in Nepal scared them.