AI and ethics
AI policies session
Concerns/Risks
- Fuels technodeterminism
- Data -- safety and security
- Trained models -- customized for whom?
- Misinformation
- Deification of AI
- Organizational indecision
- Labor + paid work
- GDPR/US regional divides
- Low budget/low staff -- how to keep up?
- Trusting AI to do everything -- buzzword compatibility
- Invisibility of human impacts
- slop inundating the Internet, appearing authoritative
- Algorithmic bias -- gen AI is not conducive to the non-binariness of being
- labor exploitation
- what are the regulations?
- Will AI drive the productivity mindset further?
- who decides and regulates
- TOO MUCH CONTENT
- Environmental impact
Benefits
- SLLRP -- Surveilled and Laundered Labor and Relationships for Profit
- Lower budget alternatives
- Efficiencies
- Productivity gains
- Summaries & Admin
- Accessibility -- moderating
- R&D and pivotal point
- Researchers in journalism -- open-source identification and verification possibilities
- Handling volume of information
- Categorizing documents
- Possibility of bottom-up participatory systems
- Grant writing
Facial recognition is deployed broadly, so you need policy governing anything that happens in public.
- It's already in use -- for example, at Madison Square Garden
- YOU CAN OPT OUT IN MANY CASES (including the airport)
Get your information offshore. Meta's systems are not safe from ingestion; it's written out in Project 2025. Governments are taking in more and more data, and the more data systems you have, the more your funders may want to take that data in.
"We use X for Y purpose" is good policy
- Attribution is important too -- tell people when you used it and how.
- Big media companies like PBS are still figuring it out, station by station.
"Don't use our Intellectual Property or personal data in AI tools" often makes sense. Think about consent driven approaches.
Generative systems can be used to anonymize stories -- great! -- but agree on how you do it and get consent.
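One way to operationalize "agree on how you do it and get consent": check consent in code, not just in policy text, before any story reaches a model. A minimal sketch, assuming the OpenAI Python SDK -- the model name, prompt wording, and `consent_given` flag are illustrative, not an endorsement of a particular vendor.

```python
# Minimal sketch: consent-gated anonymization via a generative model.
# Assumes the OpenAI Python SDK (pip install openai) with OPENAI_API_KEY
# set in the environment; the model and prompt are illustrative choices.
from openai import OpenAI

client = OpenAI()

def anonymize_story(story: str, consent_given: bool) -> str:
    """Rewrite a story with identifying details removed, only with consent."""
    if not consent_given:
        # Refuse before any data leaves your systems -- the policy gate lives in code.
        raise PermissionError("No documented consent; do not process this story.")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; use whatever your policy allows
        messages=[
            {"role": "system",
             "content": ("Rewrite the user's story so no person, place, or "
                         "organization is identifiable. Preserve meaning and "
                         "tone; do not invent new details.")},
            {"role": "user", "content": story},
        ],
    )
    return response.choices[0].message.content

# Usage: anonymize_story(interview_text, consent_given=True)
```

A human should still review the output before publication -- anonymization by a generative model can miss identifying details or introduce new ones.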
How often do these policies need to be revisited?
- At least once a year; the board and the tech team should get aligned.
Culture jamming is happening with AI.
AI in public health -- what about use with HIPAA-protected data? Where does this stand? Realistically, HIPAA will have to be rewritten, and then again in five years.
Is the risk just data being ingested into training sets?
- AI companies are data brokers -- check to be sure they can't sell or otherwise share your data, even if they aren't training the model with it.
Are boards asking these questions? Not much. It's overwhelming.
A public-option AI is something that could happen -- it would include opt-ins and the like. There was a session on this topic this week.
Who are the official regulators in this space? There is galvanizing by corporate leaders.
Mozilla has done a great job of making its own policy, though it does not extend into training boards, etc.
SynthID, from Google DeepMind, is a key tool for verifying information -- it may help you identify the source of something, including which generative tool created it.
A theme here is overwhelm! If you are looking for tools, there are websites where you can find options that let you own your own data, e.g. https://futuretools.io
Have a good internal working group so you get input and buy-in before taking anything to the board.