As if Sam Bankman-Fried did not have enough legal problems, US prosecutors lodged another explosive charge against him at the end of March. The fallen cryptocurrency champion was indicted by federal authorities for allegedly conspiring to bribe Chinese officials with $40 million to unfreeze $1 billion in cryptocurrency and hedge fund accounts. The irony of the situation is that the Chinese authorities had earlier frozen those accounts in a wave of crackdowns on digital financial fraud, recognizing the problem years ahead of the United States. Indeed, the story of one of the biggest Chinese perpetrators of digital fraud, a buccaneering entrepreneur and Ponzi schemer named Ding Ning, bears an uncanny resemblance to that of Bankman-Fried and his company, FTX. Like Bankman-Fried, Ding attracted billions of dollars in investments with much-hyped claims of financial innovation, creating a veneer of respectability fueled by expensively bought political connections and media prominence, all of it covering up a low-tech mess of fraud and sham bookkeeping. Lurid details emerged, like intimate relationships between employees and spending sprees fueled by millions in stolen client money. The swindling caught up with Ding in 2016, when he was busted for his role in a Ponzi scheme called Ezubao and led away in handcuffs. He was sentenced to life imprisonment the following year.
An article published on Medium cited a report by CSET's Josh Goldstein, Micah Musser, and CSET alumna Katerina Sedova, produced in collaboration with OpenAI and the Stanford Internet Observatory. The report explores how language models could be misused for influence operations in the future and offers a framework for assessing potential mitigation strategies.
Last year, the Internal Revenue Service (IRS) announced that it was going to begin using facial recognition to improve its identity verification process when taxpayers access its online tools (e.g., to get a copy of their tax records) to increase consumer security and reduce fraud. To do this, taxpayers would upload a copy of their government ID, along with a video selfie, to verify their identity. Not surprisingly, anti-facial recognition activists rallied their forces, claiming that it was too intrusive, too biased, and too risky. Unfortunately, their campaign was successful, as the agency announced this week that it would transition away from using the technology. This announcement is disappointing not only because it represents a step backwards for digital transformation in government but also because it shows how baseless attacks against facial recognition can win out even when they are not supported by facts or evidence. The primary reason detractors give for opposing the IRS’s use of facial recognition is their “serious concerns about privacy,” although the details of those privacy concerns are a bit murky. After all, the IRS maintains extensive records about taxpayers’ most sensitive financial information, so the idea of the agency also having access to a database of selfies does not seem particularly risky. Some objected to the IRS using a private company, ID.me, to operate its facial verification system, arguing that the company might misuse this information. But again, the IRS routinely uses contractors, including to process sensitive taxpayer information, and requires them to adhere to strict privacy controls and subjects them to penalties for violations, so there is no particular reason why facial recognition presents unique privacy risks. 
Critics also claim that the IRS should not use facial recognition because “research shows people of color are more likely to be misidentified.” Here too, the evidence does not support the claims, as independent testing by the National Institute of Standards and Technology (NIST) has shown that the best-performing facial recognition algorithms have high accuracy rates across most demographics. In addition, the specific algorithm ID.me uses has performed very well in these tests, with little variation based on demographics. Moreover, the implication of these incorrect claims about facial recognition’s “bias” seems to be that the IRS would underserve communities of color by locking them out of important government services, which shows just how little the critics understand the technology. It is important to remember that there are two types of errors—false positives (i.e., the system says two photos are of the same person but they are not) and false negatives (i.e., the system says two photos are not of the same person but they are). Higher false-positive rates do not decrease access to services because they do not stop anyone. Higher false-negative rates could potentially decrease access to services, but as NIST notes in one of its recent reports, “false negatives can often be remedied by making second attempts.” In other words, in the relatively rare instances when someone’s photo doesn’t match their government ID, for example because of poor lighting, they can probably just take a new selfie. Finally, some critics fall back on the claim that the technology presents too much of a security risk to people. For example, critics have argued that if “hackers were able to obtain the ID.me selfie records, it could be especially damaging, with potential uses ranging from committing fraud and identity theft to blackmailing people.” But a person’s face is not a secret, as anyone who has ever gone out in public can attest. 
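The asymmetry between the two error types can be made concrete with a small illustrative calculation. This sketch uses hypothetical match outcomes and is not tied to any vendor's system; it simply shows why only the false-negative rate relates to users being turned away:

```python
def error_rates(outcomes):
    """Compute (false-positive rate, false-negative rate).

    outcomes: list of (same_person, system_says_match) boolean pairs,
    where same_person is the ground truth and system_says_match is the
    verification system's decision.
    """
    # False positive: system matches two photos of different people.
    fp = sum(1 for same, match in outcomes if match and not same)
    # False negative: system rejects a genuine user's own photo.
    fn = sum(1 for same, match in outcomes if same and not match)
    negatives = sum(1 for same, _ in outcomes if not same)
    positives = sum(1 for same, _ in outcomes if same)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr
```

A nonzero false-positive rate lets an impostor through but blocks no one, while each false negative is a legitimate user who must retry, which is why NIST's observation that "false negatives can often be remedied by making second attempts" matters for the access argument.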
The purpose of using facial recognition to enhance user authentication is not that the information itself is unknown or unobtainable by anyone else, like a password or PIN, but that it is difficult for hackers to impersonate. After all, most facial recognition verification systems (including the one the IRS was using) use a “liveness check” to ensure that the selfie is genuine and not just a photo downloaded off the Internet. As tax season gets under way, it is disappointing to see that the IRS has succumbed to the concerted attacks by advocacy groups opposed to any and all forms of facial recognition. Every year, the IRS attempts to stop billions of dollars of refund fraud, identity theft, and other financial crimes that hurt everyday Americans, and greater use of facial recognition would have been a step in the right direction. Moreover, with constrained budgets and staffing challenges, not to mention steadily increasing demands on the agency, the IRS can barely keep up with its workload. The only viable solution to this problem is greater use of automation and analytics to increase agency productivity and better use of customer-facing IT. Indeed, the IRS has already embarked on a multiyear IT modernization initiative that will require it to invest billions of dollars in technology upgrades to increase its operational efficiency, enhance the taxpayer experience, and strengthen cybersecurity. However, the IRS is destined to fail if policymakers do not give the agency sufficient latitude to embrace best-in-class services available from the private sector, including the use of facial recognition and other biometrics.
Every two years, we report on federal programs and operations that are vulnerable to waste, fraud, abuse, and mismanagement, or that are in need of broad reform: our High-Risk List. Our 2021 report reviews progress and outlines the further actions needed in each area on the list.