Three-Layer Privacy Responsibility Framework

In their article “Engineering Privacy”, Sarah Spiekermann and Lorrie Faith Cranor state that software engineers carry a major responsibility when it comes to developing privacy-friendly systems, “because they are the ones devising the technical architecture and creating the code”. They present a three-layer model of user privacy concerns and a corresponding responsibility framework. Based on this model, they elaborate a set of guidelines, categorised into “privacy-by-policy” and “privacy-by-architecture”.

The authors distinguish three spheres of privacy: the User Sphere (confined to the user’s environment, i.e. laptops, mobile phones, integrated systems, etc.), the Recipient Sphere (a company-centric sphere involving the company’s back-end infrastructure) and the Joint Sphere (covering companies that host users’ information, such as email providers or Facebook). For each privacy sphere, the following table describes where the data is stored, what the engineer’s responsibility is, and which engineering issues need to be faced.

 

User Sphere

Where data is stored: users’ desktop personal computers, laptops, mobile phones, RFID chips

Engineer’s responsibility:

  • Give users control over access to themselves (in terms of access to data and attention)

Engineering issues:

  • What data is transferred from the client to a data recipient?
  • Is the user explicitly involved in the transfer?
  • Is the user aware of remote and/or local applications storing data on his system?
  • Is data storage transient or persistent?

Joint Sphere

Where data is stored: web service provider’s servers and databases

Engineer’s responsibility:

  • Give users some control over access to themselves (in terms of access to data and attention)
  • Minimize users’ future privacy risks

Engineering issues:

  • Is the user fully aware of how his data is used, and can he control this?

Recipient Sphere

Where data is stored: any data recipient’s servers and databases, i.e. those of network providers, service providers or other parties with whom the data recipient shares data

Engineer’s responsibility:

  • Minimize users’ future privacy risks

Engineering issues:

  • What data is being shared by the data recipient with other parties?
  • Can the user expect or anticipate a transfer of his data by the recipient?
  • Is personal data adequately secured?
  • Is data storage transient or persistent?
  • Can the processing of personal data be foreseen by the user?
  • Are there secondary uses of data that may not be foreseen by the user?
  • Is there a way to minimize processing, e.g. by delegating some pre-processing to the User Sphere? (see the sketch after this table)
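
The last engineering issue above points at a concrete tactic: instead of shipping raw data into the Recipient Sphere, the client can aggregate it inside the User Sphere and transmit only the derived value the service actually needs. A minimal sketch of this idea in Python (the function names and the sensor-reading example are illustrative, not from the paper):

    from statistics import mean

    def raw_readings() -> list[float]:
        """Hypothetical sensor readings collected in the User Sphere."""
        return [72.0, 75.5, 71.2, 80.3]

    def preprocess_locally(readings: list[float]) -> dict:
        """Aggregate on the user's device so raw readings never leave it.

        Only the derived statistic is shared; the individual readings,
        which could reveal fine-grained behaviour, stay local.
        """
        return {"daily_mean": mean(readings), "count": len(readings)}

    def send_to_recipient(payload: dict) -> None:
        """Stand-in for the network call into the Recipient Sphere."""
        print(f"transmitting only: {payload}")

    send_to_recipient(preprocess_locally(raw_readings()))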

Framework for Privacy-Friendly System Design

Spiekermann and Cranor propose a framework for designing privacy-friendly systems. Privacy stages are ranked from lowest to highest according to the degree of identifiability of the user (identified, pseudonymous, anonymous). Where the user is fully identified, privacy needs to be provided by policy; where users are pseudonymous or anonymous, privacy can also be provided by architecture. The following table matches these attributes with the characteristics of the corresponding systems.

 

Privacy stage 0: identified

Approach to privacy protection: privacy-by-policy (notice and choice)
Linkability of data to personal identifiers: linked
System characteristics:

  • unique identifiers across databases
  • contact information stored with profile information

Privacy stage 1: pseudonymous

Approach to privacy protection: privacy-by-policy (notice and choice)
Linkability of data to personal identifiers: linkable with reasonable & automatable effort
System characteristics:

  • no unique identifiers across databases
  • common attributes across databases
  • contact information stored separately from profile or transaction information

Privacy stage 2: pseudonymous

Approach to privacy protection: privacy-by-architecture
Linkability of data to personal identifiers: not linkable with reasonable effort
System characteristics:

  • no unique identifiers across databases
  • no common attributes across databases
  • random identifiers
  • contact information stored separately from profile or transaction information
  • collection of long-term person characteristics on a low level of granularity
  • technically enforced deletion of profile details at regular intervals

Privacy stage 3: anonymous

Approach to privacy protection: privacy-by-architecture
Linkability of data to personal identifiers: unlinkable
System characteristics:

  • no collection of contact information
  • no collection of long-term person characteristics
  • k-anonymity with large value of k
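
Stage 3 relies on k-anonymity: every released record should be indistinguishable from at least k-1 others with respect to its quasi-identifying attributes, and a large k makes re-identification correspondingly harder. A minimal sketch of such a check (the records and attribute names below are invented for illustration):

    from collections import Counter

    def k_anonymity(records: list[dict], quasi_identifiers: list[str]) -> int:
        """Return the k for which the dataset is k-anonymous: the size of
        the smallest group of records sharing one combination of
        quasi-identifier values."""
        groups = Counter(
            tuple(record[attr] for attr in quasi_identifiers) for record in records
        )
        return min(groups.values())

    # Invented records: generalised ZIP code and age band act as quasi-identifiers.
    records = [
        {"zip": "130**", "age_band": "20-29", "diagnosis": "A"},
        {"zip": "130**", "age_band": "20-29", "diagnosis": "B"},
        {"zip": "148**", "age_band": "30-39", "diagnosis": "A"},
        {"zip": "148**", "age_band": "30-39", "diagnosis": "C"},
    ]
    print(k_anonymity(records, ["zip", "age_band"]))  # prints 2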

References:

S. Spiekermann and L. F. Cranor, “Engineering privacy,” IEEE Transactions on Software Engineering, vol. 35, no. 1, pp. 67–82, 2009.

 

Privacy Facets (PriF)

Keerthi Thomas, Arosha K. Bandara, Blaine A. Price and Bashar Nuseibeh propose a process (requirements distillation) and a framework (Privacy Facets, or PriF) as a way to capture privacy-related requirements during mobile application development.

Requirements distillation process

The requirements distillation process consists of three main phases: “Structuring of the Qualitative Data”, “Information Flow Modelling” and “Privacy Problem Analysis”.

Privacy Facets Framework


Structuring of the Qualitative Data: the Privacy Facets (PriF) framework provides a set of predefined codes adapted to identifying privacy-sensitive contexts, and these codes are used to structure the qualitative data. The result of completing this phase is a set of privacy threats or concerns raised by the users.

Information Flow Modelling: in the second phase, problem models of information flows are developed, based on the information-flow problem patterns provided by the PriF framework. These problem models capture how information is created and disseminated to other users (a toy sketch of such a model follows the phase descriptions).

Privacy Problem Analysis: finally, the privacy-sensitive contexts and their privacy threats or concerns are analysed together with the information-flow models in order to elaborate the list of privacy requirements.
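
The paper defines its own problem patterns and a privacy arguments language for this; purely as an illustration, an information flow can be thought of as a record of who creates an information item and to whom it is disseminated. All names in the sketch below are invented:

    from dataclasses import dataclass

    @dataclass
    class InformationFlow:
        """One edge in an information-flow model: an information item moving
        from the actor who created it to an actor who receives it."""
        item: str       # the information disclosed, e.g. "current location"
        source: str     # actor in whose sphere the item originates
        recipient: str  # actor the item is disseminated to
        context: str    # privacy-sensitive context in which the flow occurs

    flows = [
        InformationFlow("current location", "user", "friends list", "commuting"),
        InformationFlow("current location", "user", "advertising partner", "commuting"),
    ]

    # Flag flows whose recipient the user may not anticipate; these become
    # candidate privacy threats for the problem analysis phase.
    for flow in flows:
        if flow.recipient != "friends list":
            print(f"potential threat: {flow.item} -> {flow.recipient} ({flow.context})")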

Privacy Facets

The Privacy Facets framework provides:

  • analytical tools such as thematic codes, heuristics, facet questions and extraction rules to structure qualitative data;
  • information-flow problem patterns and a privacy arguments language to model privacy requirements.

Applying these tools, the system analyst structures the qualitative data gathered in the first phase of the process using heuristic-based categories, for example (a toy coding sketch follows this list):

  • Negative Behaviour Patterns (NBP): situations in which the user chooses not to use an application because of privacy concerns.
  • Negative Emotional Indicators (NEI): keywords indicating that the user might have privacy concerns when using the application.
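
To make the idea concrete, the sketch below tags fragments of interview data with the NEI code when they contain negative-emotion keywords, and with NBP when they mention abandoning the application. The keyword lists and fragments are invented; the actual PriF heuristics and extraction rules are considerably richer:

    # Hypothetical keyword heuristics for two PriF thematic codes; the real
    # framework uses richer facet questions and extraction rules.
    NEI_KEYWORDS = {"worried", "creepy", "uncomfortable", "scared", "annoyed"}
    NBP_PHRASES = ("stopped using", "uninstalled", "turned off")

    def code_fragment(fragment: str) -> list[str]:
        """Attach thematic codes to one fragment of qualitative data."""
        codes = []
        lowered = fragment.lower()
        if set(lowered.split()) & NEI_KEYWORDS:
            codes.append("NEI")  # negative emotional indicator present
        if any(phrase in lowered for phrase in NBP_PHRASES):
            codes.append("NBP")  # user avoided the app over privacy concerns
        return codes

    transcript = [
        "I felt uncomfortable sharing my location with everyone",
        "I uninstalled the app after it posted my check-ins automatically",
    ]
    for fragment in transcript:
        print(code_fragment(fragment), "<-", fragment)
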
References:

K. Thomas, A. K. Bandara, B. A. Price, and B. Nuseibeh, “Distilling privacy requirements for mobile applications,” in Proceedings of the 36th International Conference on Software Engineering, 2014, pp. 871–882.