At Liberty Podcast
Affirmative Action and the Case Against Harvard
October 11, 2018
Harvard University is facing a lawsuit alleging that its undergraduate admissions practices unlawfully discriminate against Asian American applicants. The suit is the latest salvo in the legal battle over whether and how schools can consider race as a factor in admissions. Jin Hee Lee, senior deputy director of litigation at the NAACP Legal Defense and Educational Fund, joins At Liberty to discuss the case. She represents 25 Harvard student and alumni groups that have filed briefs in defense of Harvard's current holistic, race-conscious admissions process.
Related Content
Press Release, Dec 2024
Racial Justice
ACLU Statement on House AI Task Force Report
WASHINGTON - Today, the House Task Force on Artificial Intelligence (AI) issued its final report, marking an important step in recognizing the serious risks of discrimination and bias posed by AI systems, particularly in law enforcement's use of facial recognition technology. The report recognizes that discrimination and bias may arise in AI systems due to their design, training data, or failures by end users. The report also investigates the complex considerations around deepfakes, open AI systems, and free speech, and urges Congress to pass narrow legislation to address concrete harms.

In response to the House AI Task Force final report, Cody Venzke, senior policy counsel at the ACLU, issued the following statement:

"The ACLU commends the Task Force for recognizing that AI systems are harming people right now through discriminatory outcomes and supercharged surveillance. Curbing those abuses of AI is not a partisan issue, and more concrete action is needed to protect civil rights, while maintaining states' authority to build on those protections. The Task Force is appropriately cautious when wrestling with issues around deepfakes, open-source AI, digital identity, and other issues, recognizing that legislators should consider a wide array of tailored tools to address real, not speculative, harms while respecting civil rights and civil liberties."

The House Task Force on Artificial Intelligence report can be found here.
News & Commentary, Dec 2024
Racial Justice
Human Rights
Why the Fight for Racial Justice Is a Human Rights Issue
On International Human Rights Day, the ACLU is advocating for long-sought reparatory justice to end the continued abuses against Black and brown communities. By Alaina Ruffin
Press Release, Oct 2024
National Security
ACLU Warns that Biden-Harris Administration Rules on AI in National Security Lack Key Protections
WASHINGTON - The Biden-Harris administration today released its National Security Memorandum on Artificial Intelligence (AI), establishing guidelines for how the U.S. government uses AI in national security programs, such as counterterrorism, intelligence, homeland security, and defense.

The use of AI to automate and expand national security activities poses some of the greatest dangers to people's lives and civil liberties. Agencies are increasingly exploring the use of AI in decisions about whom to surveil, whom to stop and search at the airport, whom to add to government watchlists, and even who is a military target. While the government's policy includes some important steps forward, such as requiring national security agencies to better track and assess their AI systems for risks and prohibiting a subset of dangerous AI uses, it falls far short in other critical areas, leaving glaring gaps with respect to independent oversight, transparency, notice, and redress. The policy imposes few substantive safeguards on a wide range of AI-driven activities, by and large allowing agencies to decide for themselves how to mitigate the risks posed by national security systems that have immense consequences for people's lives and rights. As we have repeatedly seen, this is a recipe for dangerous technologies to proliferate in secret.

"Despite acknowledging the considerable risks of AI, this policy does not go nearly far enough to protect us from dangerous and unaccountable AI systems. National security agencies must not be left to police themselves as they increasingly subject people in the United States to powerful new technologies," said Patrick Toomey, deputy director of the ACLU's National Security Project. "If developing national security AI systems is an urgent priority for the country, then adopting critical rights and privacy safeguards is just as urgent. Without transparency, independent oversight, and built-in mechanisms for individuals to obtain accountability when AI systems err or fail, the policy's safeguards are inadequate and place our civil rights and civil liberties at risk."

For years, the ACLU has been urging far stronger safeguards and transparency about the AI tools that national security agencies are deploying, the rules constraining their use, and the dangers these systems pose to fairness, privacy, and due process.
Press Release, Oct 2024
Racial Justice
ACLU Comment on the Department of Labor's AI Guidelines for Employers and Vendors
WASHINGTON - Yesterday, the U.S. Department of Labor (DOL) released a roadmap of artificial intelligence (AI) best practices, designed to ensure that emerging technologies such as AI enhance job quality, benefit workers, and comply with existing employment laws when they are used in the workplace. The best practices, issued pursuant to President Biden's executive order on the development and use of AI, include recommendations for AI system developers and employers that prioritize workers' experiences and empowerment throughout the entire AI lifecycle, from design to use and oversight. The practices also call on employers and developers to create standards and processes to identify and mitigate risks from AI systems (such as threats to health and safety, civil rights, privacy, fair labor rights, and labor organizing) before a developer markets a system or an employer deploys it, and recommend turning to different systems when sufficient mitigation is not possible.

"We applaud DOL for issuing best practices for use of AI that center workers' empowerment, civil rights, privacy, and safety protections and call on developers and employers to adopt critical guardrails in developing, procuring and using AI systems. Employers and developers should take note, as many have been burying their heads in the sand on their risk of legal liability with respect to AI tools," said Olga Akselrod, senior staff attorney of the ACLU Racial Justice Program. "Following these best practices for AI can help employers and developers ensure that they do not violate existing civil rights and other employment laws that protect workers."

The best practices also recommend transparency measures through conducting and publishing impact assessments, notice and recourse for impacted workers, and measures to minimize worker displacement. The DOL also stated that employers should minimize electronic monitoring and avoid unnecessary collection or retention of worker data.