Facebook just filed a patent on using social network data to influence lending decisions. God help us all.
If there were any confusion over why Facebook has continually defended its policy requiring users to display their real, legal names, the company may finally have laid it to rest with its recent patent application. Earlier this month, the social giant filed to protect a tool ostensibly designed to track how users are networked together—a tool lenders could use to accept or reject a loan application based on the credit ratings of the applicant's social network.
In short: You could be denied a loan simply because your friends have defaulted on theirs. It's the kind of digital redlining that critics of "big data" collection have been warning us about for years. It could make Facebook a lot of money, it could make the Web even less safe for poor people, and it could be just the beginning.
Many banking institutions in the US have a long history of discriminatory lending. Federal laws passed in the 1970s made these practices illegal and further protected the poor from discriminatory credit reporting and lending practices. But these laws narrowly define lenders and creditors in ways that don’t apply so neatly in the internet age.
Depending on which factors are considered and which aren't, predictive modeling based on one's own history and behaviors can be wildly inaccurate. As the pool of available data grows, that could be good or bad news for consumers, depending on the algorithm used. Despite Facebook's self-assured patent application and the company's apparent confidence in its "authorized nodes," modeling based on one's social network only presents more opportunities for discriminatory and inaccurate conclusions.
Behavioral research consistently shows we're more likely to seek out friends who are like ourselves, and we're even more likely to be genetically similar to them than to strangers. If our friends are likely to default on a loan, it may well be true that we are too. Depending on how that calculation is made, and on how data-collecting technology companies are regulated under the Fair Credit Reporting Act, it may or may not be illegal. A policy that judges individuals' qualifications based on the qualifications of their social networks would reinforce class distinctions and privilege. Returning to an era where the demographics of your community determined your creditworthiness should be illegal.
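To see how blunt such a calculation could be, here is a minimal sketch of a network-average screen of the kind the patent application suggests. All names, scores, and the cutoff are hypothetical; this is an illustration of the logic, not Facebook's actual implementation.

```python
# Hypothetical sketch: screen a loan applicant by the average credit
# score of their social network, ignoring their own score entirely.

def network_screen(applicant_score, friend_scores, minimum=620):
    """Approve only if the friends' average clears a minimum cutoff.

    If the applicant has no scored connections, fall back to
    their own score. (All values here are illustrative.)
    """
    if not friend_scores:
        return applicant_score >= minimum
    network_average = sum(friend_scores) / len(friend_scores)
    return network_average >= minimum

# A creditworthy applicant (740) is denied because their
# friends average below the cutoff:
print(network_screen(740, [580, 600, 615]))  # False
```

Note that the applicant's own 740 never enters the decision: the network average (about 598) falls below the cutoff, so the application is rejected regardless of individual history. That is precisely the guilt-by-association problem.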
Facebook's true value comes from the data it collects on us, which it in turn sells to advertisers, lenders, and whoever else it wants to. The accuracy of that data is critical to the success of the company's business model, and this patent is Facebook doubling down on the supposed truth in its networks.
But a lot of that data is bad because Facebook isn’t real life. Our social networks are not simply our friends. The way we “like” online is usually not the way we “like” in real life. Our networks are clogged with exes, old co-workers, relatives permanently set to mute, strangers and characters we’ve never even met.
On Facebook, we interact the most not with our best friends, but with those friends and acquaintances who use Facebook the most. This could lead not only to discriminatory lending decisions, but to completely unpredictable ones—how will users exercise due process to determine why their loan applications were rejected when a mosaic of proprietary information formed the ultimate decision? How will users know what any of that proprietary information says about them? How will anyone know if it's accurate? And how could this change the way we interact on the Web entirely, when fraternizing with less fiscally responsible friends or family members could cost you your mortgage?