Evaluating security risks when working with high-risk constituents


SESSION #1

Evaluating security risks when working with high-risk constituents

Facilitated by Tomas Krag (formerly of Refugees United)


I. Perceived Threats (PT) vs. Real Threats (RT)

a. Perceived threats are where things can go wrong; the other dangerous perception is believing you have dealt with the risks when you haven't.

b. How do you make sure you understand the threats correctly?

c. Trust is the primary issue

d. Do users have an understanding of what it means to be online?


II. Security Issues

a. Privacy

b. Cultural sensitivities

c. Censorship

d. Technical risks involved

e. Encryption

f. Databases storing personal information

g. Which platforms are at risk or being targeted?


III. Threat Analysis & Impact Evaluation

a. Threats are largely intangible; it is hard to prove whether they are real.

b. Anonymity

c. Targeted persecution


IV. Communicating with Constituencies

a. Simplify and explain in a reasonable way. How can you explain security and privacy in terms constituents can relate to? (See bullet c below.)

b. Evaluating usage patterns

c. Community monitors (i.e., volunteers helping refugees register safely). One interesting method: pre-generated passwords on a “scratch card,” very similar to how mobile prepaid top-up works for many people, so it feels normalized; users know that if someone sees what is under the scratch portion, that person has access to money, information, etc. (A minimal sketch of this idea follows below.)
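
The session did not describe an implementation, but the scratch-card approach roughly amounts to pre-generating short, human-readable one-time codes that can be printed under a scratch-off panel. The sketch below is an assumption-laden illustration in Python; the alphabet, code length, and function name are all hypothetical choices, not anything presented in the session.

```python
# Hypothetical sketch of the "scratch card" idea noted above: pre-generate
# short, readable one-time codes that community monitors could print under
# a scratch-off panel, much like mobile prepaid top-up cards.
# All names and parameters here are illustrative assumptions.
import secrets
import string

# Drop easily confused characters (0/O, 1/I) so printed codes are easy to read.
ALPHABET = "".join(
    c for c in string.ascii_uppercase + string.digits if c not in "0O1I"
)

def generate_scratch_codes(count: int, length: int = 12) -> list[str]:
    """Return `count` unique random codes, grouped in chunks of four
    (e.g. ABCD-EFGH-JKLM) to make transcription easier."""
    codes = set()
    while len(codes) < count:
        raw = "".join(secrets.choice(ALPHABET) for _ in range(length))
        codes.add("-".join(raw[i:i + 4] for i in range(0, length, 4)))
    return sorted(codes)

if __name__ == "__main__":
    for code in generate_scratch_codes(5):
        print(code)
```

The appeal of the format, as discussed in the session, is less about the generation mechanics than the familiar mental model: people already treat the hidden value on a prepaid card as a secret worth protecting.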