Security First Conversations
Reason for session: To get a sense of what questions we want to explore in sessions about digital security at the conference.
We went over digital security questions we want answered over the next few days:
People expressed interest in:
* What specific technologies do people use? Can we help folks understand the landscape?
* Internet shutdown context: what materials might be useful for people preparing for or affected by a shutdown?
* What are useful open-source privacy tools for human rights defenders and journalists?
* What are ways encryption technologies can make you less secure, and what are alternatives to using encryption tools (+1: how can we think about "human encryption")?
* How can we adapt security recommendations for people with limited resources and tools (e.g. "digital security for low bandwidth or no Internet")?
* What's important for us in mobile apps: understanding permissions on mobile apps (contacts, location, etc). Is it possible to become more aware of which apps are safe for us to download in terms of security, and can we advocate for apps that promote security?
Our questions:
* How can you still be secure in your communications when things break apart?
* What's the discussion around ethics in digital security?
* How do we determine authenticity? How do we know we're communicating with who we think we are?
* Are there tool-based questions we have? Can anyone share from their experience what tools are being used for security, and what each tool can be used for?
* How can trainers and teachers share the examples they're facing with frontline groups? How can we share those examples?
Identifying some common themes:
- Tools and contexts
- Unpack more questions
We then shared stories as a way of talking about more questions.
Story: Working in Southeast Asia with LGBTQI and women's communities, across a range of movements, from high-bandwidth, data-savvy groups to situations where the Internet is not reliable. Specific example: working with sex workers in a specific country, the session was about what to do if your phone is hacked and what you can do about it. Within the training, we split into groups: Scenario 1: iPhone device. Scenario 2: Android device. Two people were following the iPhone advice. In that country, you can go to the telco to check your phone (there are only two telcos). Q: "But which telco should I reach out to?" A: "Whichever one you have." Q: "But I have a dual-SIM iPhone." Surprise: her phone was actually an Android phone that looks like an iPhone!
A lot of activists are using devices that don't have authentic software and don't get updates. These are interesting moments in a training: you're informing someone that the device they're using is actually a different phone than they thought.
What about when technologies fail? Takeaway: having language for when people are feeling at risk. Whenever people planned a protest, the government would shut down the Internet in the days before. The group developed a strategy where one person stays at home while another goes to the protest, to help mitigate risk. Longer-term work: holding a one-day meeting with them where they came up with code words. Example: "I'm a pink eating [X]" to indicate "I'm at a protest and the police are close by." They came up with safety words: "Pink: I'm at the rally; Blue: I'm at home." "I'm eating burritos: the police are close by; I'm eating pizza: everything's okay."
Instead of moving the encryption to tools, we pushed them to words at specific moments. It worked for a few months. When more people came into the network, onboarding everyone onto those words became much more difficult.
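At its core, a code-word scheme like the one in this story is just a shared mapping from innocuous phrases to real meanings, agreed in advance. A minimal sketch (the phrase pairs are the ones from the story; the implementation itself is purely illustrative, not anything the group actually used):

```python
# Illustrative sketch of a code-word ("human encryption") scheme: a mapping
# from innocuous phrases to their agreed hidden meanings.
# Phrase pairs are from the story above; the code itself is hypothetical.

CODE_WORDS = {
    "pink": "I'm at the rally",
    "blue": "I'm at home",
    "eating burritos": "the police are close by",
    "eating pizza": "everything's okay",
}

def decode(message: str) -> list:
    """Return the hidden meanings of any agreed code phrases found in a message."""
    lowered = message.lower()
    return [meaning for phrase, meaning in CODE_WORDS.items() if phrase in lowered]

print(decode("I'm a pink eating burritos"))
# → ["I'm at the rally", "the police are close by"]
```

The onboarding problem noted above shows up here directly: every new member has to receive and memorize the whole mapping out of band, and changing a single phrase means re-briefing everyone.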
Story: In the context I'm in, there's a thing that permeates: "security culture". It totally slows down onboarding without stopping the protest. There's a question of whether it's counter-productive or even adversarial. A lot of activists we talk to feel that encryption is safe, but if someone is recording the messages, maybe they don't get into them now; maybe they want to identify an emerging leader in that group, and maybe they later decide to put more computing resources toward decrypting the messages, and that can be a big problem.
Story: Generally, a lot of computers are using unofficial licenses. After some years, unofficial Windows copies no longer get patched. It's a big security problem. A lot of the organizations we help are using unpatched software.
Story: In-person training: "Can't find my email app on Android." It turned out the device wasn't running original Android. In that situation, we told the person the truth that it wasn't an original Android and explained why it doesn't have the app.
Story: Reporting on corruption. They thought they were secure on Telegram, because they thought they had a secure channel. They didn't account for the group being infiltrated, or for people being of two minds about the prime minister. At first it ran well. Things changed when leaders became very active, because the messages became very valuable. Someone started screenshotting the messages on Telegram. They only found out about it because Facebook Groups were formed. It broke the momentum of what they were doing.
We talked about baseline security, aka the common themes folks cover before giving specific tool recommendations, for example:
* Software updates and authorized operating systems
* Passwords
* How the Internet works, knowing when something is unencrypted, and the different types of encryption
* Email encryption (thank goodness for Protonmail! It has made things a lot easier to use than GPG/PGP, software that required four-hour trainings to go over)
* Browsing is another big one!
Different contexts have different baselines:
- Encryption
- Data protection
- Passwords
Sometimes during a training, people say they don't need it!
So many things are contextual — it can be hard to determine and make a judgement around appropriate software to pick.
Other things: Who can help an organization when it's being attacked? E.g. AccessNow, CiviCert, CitizenLab, CitizenClinic, and EFF's Threat Lab. But there are very few actors who can assess what type of malware is being used and determine who the actor is. There's an increasing demand for malware analysis skills. Some groups are overloaded with requests; CitizenLab, for example, is overworked. The content produced by these groups is helpful as proof, and has had major news impact that leads to tangible security changes in these products.
Story: An organization can ask for tech help, and the technologists assisting can help at the first or second level, but forensic work or attribution in malware analysis is often beyond them.
What's been effective: looking at digital security from a data management perspective and publishing on it. Following a basic model of "how do you gather your data, what are the secure ways to do that, are you using secure platforms for your questionnaires" (e.g. people using Facebook to gather that information); that is, looking at it from a research-process angle.
Other questions we identified:
* How are you applying digital security concepts to the communities you're working with? (e.g. working with journalists: "How do you gather your data from sources? How do you publish it?" Identifying what's different between a regional reporter and a national reporter.)
* Is operational safety slowing down activism? (A discussion to unpack.)
* Can we talk about time in relation to activism? E.g., if we had more time to slow down and train people, versus having just one day. Range of time around trainings: 3-5 days, or 2 days plus gathering details on an organization (e.g. via a survey) plus an audit.
* How do we get second-level support for our work? Remote, very technical second-level support, but also: if there are only two people in a city doing digital security, how do those people get support within their location?
Question: Security around technologies and transactions? When we get our donations, they come through Stripe via our CRM, so something around transactions and digital security. Or blockchain, in terms of how it can maybe help nonprofits? Not sure. More about the security of the data your organization is getting: contacts, the database, the donations themselves.
Question: How do you make CiviCRM more secure? Further considerations: how the data gets stored, how it gets sent, how it gets backed up, where you're hosting it, and whether you're responsible for your own encryption. A lot of volunteers may see that data while doing data entry, and they may have access to data they didn't enter themselves. Most of the time you'll go to another organization to handle that, but how do you know they have the judgement to do it correctly?
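The volunteer concern above is essentially a least-privilege problem: data-entry accounts should only see the records they entered. A generic sketch of that idea (this is not CiviCRM's actual API or ACL system, which has its own permission framework; every name here is hypothetical):

```python
# Illustrative least-privilege sketch (hypothetical, NOT CiviCRM's API):
# volunteers doing data entry see only records they entered themselves,
# while staff can see everything.

from dataclasses import dataclass

@dataclass
class Contact:
    name: str
    entered_by: str  # username of the person who entered the record

def visible_contacts(records, user, role):
    """Filter CRM records by role: staff see all, volunteers only their own entries."""
    if role == "staff":
        return list(records)
    return [r for r in records if r.entered_by == user]

db = [Contact("Donor A", "vol1"), Contact("Donor B", "vol2")]
print([c.name for c in visible_contacts(db, "vol1", "volunteer")])
# → ['Donor A']
```

In a real deployment this kind of rule would live in the CRM's own access-control layer rather than application code, but the principle — scoping volunteer access to the records they created — is the same.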
Question: How secure is Amazon Web Services? Further considerations: it's hard for an organization that isn't specialized in security to determine how secure a provider is. Example: hiring people to do an audit, and maybe another external audit to check each other's work, but it's a challenge to know how good they are.
Question: How do you manage the expectation that an audit only holds for a specific point in time, while also balancing that with the need for an organization to be able to do its work? Further considerations: maybe there's an alternative to some of these security considerations; maybe it's transparency.