Big data ethics

Big data ethics refers to systemising, defending, and recommending concepts of right and wrong conduct in relation to digitised data, in particular personal data; it is also known simply as data ethics.[1] Since the dawn of the Internet, the sheer quantity and quality of data generated has increased dramatically, and it is now growing exponentially. Big data describes data so voluminous and complex that traditional data processing software is inadequate to deal with it. Recent innovations in medical research and healthcare, such as high-throughput genome sequencing, high-resolution imaging, electronic medical patient records, and internet-connected medical devices, have triggered a data deluge that will reach the exabyte range in the near future. Data ethics is of increasing relevance as the quantity of data grows, because of the scale of its impact.

Big data ethics differs from information ethics: information ethics focuses on issues of intellectual property and the concerns of librarians, archivists, and information professionals, while big data ethics is more concerned with collectors and disseminators of structured and unstructured data, such as data brokers, governments, and large corporations.

Principles

Data ethics is concerned with the following principles:

  1. Ownership - Individuals own their own data.
  2. Transaction transparency - If an individual's personal data is used, they should have transparent access to the design of the algorithms used to generate aggregate data sets.
  3. Consent - If an individual or legal entity would like to use personal data, informed and explicitly expressed consent is required, covering what personal data moves to whom, when, and for what purpose.
  4. Privacy - If data transactions occur, all reasonable effort should be made to preserve privacy.
  5. Currency - When a financial transaction results from the use of an individual's personal data, the data subject should be notified immediately of the transaction and its scale.
  6. Openness - Aggregate data sets should be freely available to everyone.

Ownership

Who owns data? Ownership involves determining rights and duties over property. The concept of data ownership is linked to one's ability to exercise control over, and limit the sharing of, one's own data. If one person records their observations of another person, who owns those observations: the observer or the observed? What responsibilities do the observer and the observed have in relation to each other? Given the massive scale and systematisation of the observation of people and their thoughts that the Internet has brought about, these questions are increasingly important to address. Slavery, the ownership of a person, is outlawed in all recognised countries. The question of personal data ownership falls into unknown territory between corporate ownership, intellectual property, and slavery. Who owns a digital identity?

European law, in the form of the General Data Protection Regulation, indicates that individuals own their own personal data.

Personal data refers to data sets describing a person, ranging from physical attributes to preferences and behaviour. Examples of personal data include: genome data, GPS location, written and spoken communication, lists of contacts, internet browsing habits, financial transactions, supermarket spending, tax payments, criminal records, laptop and mobile phone camera recordings, device microphone recordings, driving habits recorded via car trackers, mobile phone records, health records, fitness activity, nutrition, substance use, heartbeat, sleep patterns, and other vital signs. The collection of one individual's personal data forms a digital identity (or perhaps digital alter ego is more fitting). A digital identity encompasses all of our personal data, shadowing, representing, and connected to our physical and ideological selves. The distinction between data categories is not always clear-cut. For example, health data and banking data are intertwined: behaviour and lifestyle can be inferred from banking data and are hugely valuable for predicting the risk of chronic disease, so banking data is also health data. Health data can indicate how much an individual spends on healthcare, so health data is also banking data. These overlaps exist between other data categories too; for example, location data, Internet browsing data, and tax data are all essentially about individuals.

The protection of the moral rights of an individual is based on the view that personal data is a direct expression of the individual's personality: the moral rights are therefore personal to the individual and cannot be transferred to another person except by testament when the individual dies. Moral rights include the right to be identified as the source of the data and the right to object to any distortion or mutilation of the data which would be prejudicial to his or her honour or reputation. These moral rights to personal data are perpetual.

A key component of personal data ownership is unique and controlled access, i.e. exclusivity. Ownership implies exclusivity, particularly with abstract entities like ideas or data points. It is not enough to simply have a copy of your own data; others should be restricted in their access to what is yours. Knowing what data others keep is a near-impossible task, so a simpler approach is to cloak yourself in nonsense. To prevent corporations or institutions from holding a meaningful copy of your data, it is possible to send noise that confuses the data they have. For example, a robot could search for random terms that you would not usually be inclined to search for, making the data obtained by the search engine useless through confusion (see TrackMeNot, by New York University).
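
As a rough illustration of this noise idea (a sketch only; the real TrackMeNot extension draws plausible decoy queries from live feeds and issues them from the browser), assuming a hypothetical decoy list and search endpoint:

```python
import random
import time
import urllib.parse

# Hypothetical pool of decoy topics; a real obfuscator such as TrackMeNot
# draws plausible terms from live feeds instead of a fixed list.
DECOY_TERMS = [
    "antique barometer repair", "quinoa irrigation", "medieval falconry",
    "volcano insurance", "tuba maintenance", "llama grooming tips",
]

def decoy_search_url(base="https://search.example.com/search"):
    """Build a search URL for a randomly chosen decoy query."""
    term = random.choice(DECOY_TERMS)
    return base + "?" + urllib.parse.urlencode({"q": term})

# Emit decoy queries at random intervals so the timing also looks organic;
# a real implementation would fetch these URLs from the user's browser.
for _ in range(3):
    print("decoy query:", decoy_search_url())
    time.sleep(random.uniform(1.0, 5.0))
```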

Ownership puts emphasis on the ability to conveniently move data from one service to another, i.e. portability. When personal data is owned by the individual, they have the option to simply remove it and take it to another site if they become dissatisfied with the service. Individuals should be offered a high degree of convenient portability, allowing them to switch to alternatives without losing historic data collections describing product preferences and personal conversations. For example, one may choose to switch to an alternative messaging app, and this should be possible without losing the record of previous conversations and contacts. Giving individuals the option to switch services without the inconvenience of losing historical data means that services need to keep customers happy by providing a good service rather than locking them in by means of incompatibility with alternatives.

For portability, data expression must be standardised in such a way that this can happen seamlessly. For example, describing the unit as "kilograms" rather than "kg" means that machines treat the two as different, although they are the same. These small variations can result in messy data that cannot easily be combined or transferred into a new system that does not recognise them. Currently, Apple states that it provides privacy services; however, it is difficult to extract data from Apple systems, making it difficult to migrate to an alternative. In the Personal Data Trading (PDT) framework, discussed below, data expression would be standardised for easy portability with the click of a button. Standardisation would also facilitate the setting up of mechanisms to clean data, which are necessary to install checks and balances validating the quality of the data. By joining multiple sources, erroneous or falsely entered data could be identified.
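
A minimal sketch of such a normalisation step, assuming a made-up synonym table rather than any published standard:

```python
# Map the many spellings a source system may use onto one canonical label.
# This synonym table is illustrative, not a published standard.
UNIT_SYNONYMS = {
    "kg": "kilogram", "kgs": "kilogram", "kilograms": "kilogram",
    "kilogram": "kilogram",
    "lb": "pound", "lbs": "pound", "pounds": "pound", "pound": "pound",
}

def normalise_record(record):
    """Return a copy of the record with its unit field standardised."""
    unit = record["unit"].strip().lower()
    if unit not in UNIT_SYNONYMS:
        raise ValueError(f"unrecognised unit: {record['unit']!r}")
    return {**record, "unit": UNIT_SYNONYMS[unit]}

# Two exports of "the same" measurement now compare as equal.
a = normalise_record({"value": 70, "unit": "kg"})
b = normalise_record({"value": 70, "unit": "Kilograms"})
assert a == b
```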

Who owns data today? Today, data is controlled, and therefore owned, by the owner of the sensor. The individual making the recording or the entity owning the sensor controls what happens to that data by default. For example, banks control banking data, researchers control research data, and hospitals control health record data. For historical reasons, research institutions each hold fragments of data describing parts of an individual; health research data in Europe exists in a fragmented manner, controlled by different institutions. Data categories often say more about who controls the data and where it is stored than about what the data describes or the purposes it could serve. While the Internet is not owned by anyone, corporations have come to control much of the personal data, creating value by making use of data collection, search engines, and communication tools.[2] By default, as a side effect of owning the intellectual property making up the Internet's tools, these corporations have been collecting our digital identities as raw material for services delivered to other companies at a profit. Most of the data collected via Internet services is personal data describing individuals. Traditionally, medicine organises data around the individual because this enables an understanding of health; when studying epidemiology, the data of groups is still organised around the individual. Many of the processes being made more efficient concern individuals and group dynamics. However, data is not necessarily organised around the individual; rather, it is controlled by the owner of the sensors.

In China, the government largely owns data. In one Chinese province, data was used to generate a social index score per person based on online and offline individual behaviour, such as jaywalking and the amount of toilet paper used in a public lavatory. The social index determines access to particular public services.

Transaction transparency

Concerns have been raised around how biases can be integrated into algorithm design, resulting in systematic oppression.[3] Algorithm design should be transparently disclosed. All reasonable efforts should be made to take into account the differences between individuals and groups, without losing sight of equality. Algorithm design needs to be inclusive.

In terms of governance, big data ethics is concerned with which types of inferences and predictions should be made using big data technologies such as algorithms.[4]

Anticipatory governance is the practice of using predictive analytics to assess possible future behaviours.[5] This has ethical implications because it affords the ability to target particular groups and places, which can encourage prejudice and discrimination.[5] For example, predictive policing highlights certain groups or neighbourhoods to be watched more closely than others, which leads to more sanctions in those areas and closer surveillance of people who fit the same profiles as those who have been sanctioned.[2]

The term "control creep" refers to data that has been generated with a particular purpose in mind but which is repurposed.[5] This practice is seen with airline industry data which has been repurposed for profiling and managing security risks at airports.[5]

In regard to personal data, the individual has the right to know:

  1. Why the data is being collected.
  2. How it is going to be used.
  3. How long it will be stored.
  4. How it can be amended by the individual concerned.

Examples of ethical uses of data transactions include:

  • Statutory purposes: All collection and use of personal data by the state should be completely transparent and covered by a formal license negotiated prior to any data collection. This civil contract between the individual and the responsible authorities sets out the conditions under which the individual licenses the use of his or her data to the responsible authorities, in accordance with the above transparency principles.
  • Social purposes: All uses of individual data for social purposes should be opt-in, not opt-out, and should comply with the transparency principles.
  • Crime: For crime prevention, an explicit set of general principles for the harvesting and use of personal data should be established and widely publicised. The governing body of the state should consider and approve these principles.
  • Commerce: Personal data used for commercial purposes belongs to the individual and may not be used without a license from the individual setting out all permitted uses. This includes data collected from all websites, page visits, transfers from site to site, and other Internet activity. Individuals have the right to decide how, where, and whether their personal data is used for commercial purposes, on a case-by-case or category basis.
  • Research: Personal data used for research purposes belongs to the individual and must be licensed from the individual under the terms of a personal consent form which fulfils all the transparency principles outlined above.
  • Extra-legal purposes: Personal data can only be used for extra-legal purposes with the explicit prior consent of the rights holder.

Consent

If an individual or legal entity would like to use personal data, the informed and explicitly expressed consent of the owner of the data is needed, covering what personal data moves to whom, when, and for what purpose. The owner of the information has the right to know how their data has been used.

A data transaction cannot be used as a bargaining chip for an unrelated or superfluous issue of consent, for example, improving marketing recommendations when you are simply trying to ring your mother. While there are services for which you need to share data, these transactions should not be exaggerated and should be held within context. For example, an individual needs to share data to receive adequate medical recommendations; however, that medical data does not automatically need to go to a health insurance provider. These are separate data transactions which should be dealt with as such. It ultimately comes down to the individual to make decisions about their data. Implied consent, such as the assumption that data ownership transfers because you use a chat application, is not considered valid.

The full scope and extent of the transaction needs to be explicitly detailed to the individual, who must be given a reasonable opportunity to evaluate whether they would like to engage. Timing is critical, i.e. these issues should be dealt with in a calm moment with time to reflect, not at the moment you want to buy a train ticket or are experiencing a medical emergency.

The permission needs to be given in a format which is explicit, not implied. Just because you chose an application to chat with your partner does not mean that the app needs access to your entire list of contacts. The button which you click to give permission should not be designed in such a way that opting in is the automatic behaviour, for example, in a binary choice where one button is smaller than the other, where one button is hidden in the design while the other jumps out at you, or where one button requires multiple clicks whereas the other requires a single click.

While a person could give continuous consent on a general topic, it should always be possible to retract that permission for future transactions. As with consent for sexual activity, consent can be withdrawn going forward, but retraction of consent for past data transactions is not feasible. For example, an individual could give consent to use their personal data for any cause advancing the treatment of cardiovascular disease until further notice. Until the individual changes their mind, these transactions can continue to occur seamlessly without their involvement.
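
A minimal sketch of what such a granular, revocable consent record might look like; the class and field names here are hypothetical, not drawn from any standard:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """Hypothetical granular consent record: explicit scope, named
    recipient, stated purpose, and revocation that only affects the future."""
    data_category: str                   # e.g. "cardiovascular health data"
    recipient: str                       # who the data may move to
    purpose: str                         # what it may be used for
    granted_at: datetime
    revoked_at: Optional[datetime] = None

    def revoke(self) -> None:
        self.revoked_at = datetime.now(timezone.utc)

    def permits(self, when: datetime) -> bool:
        # Valid only inside the window [granted_at, revoked_at); transactions
        # that happened inside the window stay valid after revocation.
        if when < self.granted_at:
            return False
        return self.revoked_at is None or when < self.revoked_at

consent = ConsentRecord(
    data_category="cardiovascular health data",
    recipient="heart-research-consortium.example",
    purpose="advancing treatment of cardiovascular disease",
    granted_at=datetime.now(timezone.utc),
)
future_use = datetime.now(timezone.utc) + timedelta(days=30)
consent.revoke()                         # from now on, no new transactions
assert not consent.permits(future_use)   # future use is no longer permitted
```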

Privacy

If data transactions occur, all reasonable effort needs to be made to preserve privacy.

“No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.” - Universal Declaration of Human Rights, Article 12.

Why does privacy matter? Data is useful for making systems more efficient; however, defining the end goal of this efficiency is essential in assessing how ethical the use of the data is.

The use of data monitoring by governments to observe citizens needs explicit authorisation through an appropriate judicial process. It may even be more efficient to observe the relatively small number of criminals manually rather than to track the relatively large general population. Blanket observation of inhabitants by national governments and corporations is a slippery slope towards an Orwellian style of governance. Privacy is not about keeping secrets; it is about choice, human rights, freedom, and liberty. For example, sharing your medical data with your doctor under the understanding that it will be used to improve your health is ethically sound, even when the doctor reveals that data to another doctor. However, when that same data is shared with a marketing agency, as happened with the British National Health Service and Google's DeepMind artificial intelligence company, the ethical implications are more uncertain (see Google DeepMind and healthcare in an age of algorithms by Julia Powles and Hal Hodson). Privacy is about choosing the context: what data you share, with whom, for which purpose, and when. Privacy is currently not being implemented, possibly because the personal power and wealth to be gained from not doing so act as a disincentive for both private companies and governments. Also, using data to measure actual social impact could reveal inefficiencies that would be inconvenient for the politicians involved or for companies' claims.

The public debate on privacy is often unfairly reduced to an over-simplistic binary choice between privacy and scientific progress. Marketing campaigns have even dismissed critics of centralised data collection as resisting progress and holding on to the past. However, the benefits of scientific progress through data can be achieved in a manner consistent with privacy values, as has historically been the case in epidemiological research. The extraction of value from data without compromising identity privacy is certainly possible technologically, e.g. by utilising homomorphic encryption and algorithmic design that makes reverse engineering difficult.

Homomorphic encryption allows different services to be chained together without exposing the data to each of the services; even the software engineers working on the software would not be able to override the user. Homomorphic encryption schemes are malleable by design, meaning they can be used in a cloud computing environment while ensuring the confidentiality of processed data. The technique allows analytical computations to be carried out on ciphertext, generating encrypted results which, when decrypted, match the results of the same operations performed on the plaintext.
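
As a toy illustration of the homomorphic property, the sketch below implements the Paillier cryptosystem, a well-known additively homomorphic scheme, with deliberately tiny primes; a real deployment would use a vetted library and much larger keys:

```python
import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

def keygen(p, q):
    """Generate a Paillier keypair from two primes (toy-sized here)."""
    n = p * q
    n2 = n * n
    g = n + 1                                  # standard simplified choice
    lam = lcm(p - 1, q - 1)
    # mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (n, g), (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)                 # random blinding factor
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    n2 = n * n
    return (pow(c, lam, n2) - 1) // n * mu % n

pub, priv = keygen(1789, 1861)                 # toy primes: never use in practice
c1, c2 = encrypt(pub, 20), encrypt(pub, 22)
# Multiplying ciphertexts adds the hidden plaintexts: the party computing
# c1 * c2 never sees 20, 22, or 42.
assert decrypt(priv, c1 * c2 % (pub[0] ** 2)) == 42
```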

The results of analytics can be presented in such a way as to be fit for purpose without compromising identity privacy. For example, a data sale stating that "20% of Amsterdam eats muesli for breakfast" would transmit the analytical value of the data without compromising privacy, whereas saying that "Ana eats muesli for breakfast" would not maintain privacy. Algorithmic design and the size of the sample group are critical to minimising the capacity to reverse engineer statistics and track targeted individuals. One technical solution to the reverse engineering of aggregate metrics is to introduce fake data points describing made-up people, which do not alter the end result, for example, the percentage of a group that eats muesli.
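
A minimal sketch of this fake-data-point idea, assuming decoys are injected at the same rate as the true aggregate so that the published percentage is unchanged while individual rows can no longer be trusted as real people:

```python
import random

def pad_with_decoys(records, n_fake, true_rate):
    """Append made-up people whose muesli habit matches the true rate,
    leaving the aggregate percentage unchanged."""
    n_yes = round(n_fake * true_rate)          # decoys that 'eat muesli'
    fakes = [{"name": f"decoy-{i}", "eats_muesli": i < n_yes}
             for i in range(n_fake)]
    random.shuffle(fakes)
    return records + fakes

real = [{"name": f"resident-{i}", "eats_muesli": i % 5 == 0}
        for i in range(100)]                   # true rate: 20%
rate = sum(r["eats_muesli"] for r in real) / len(real)
padded = pad_with_decoys(real, n_fake=50, true_rate=rate)
print(sum(r["eats_muesli"] for r in padded) / len(padded))   # still 0.2
```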

Privacy has been presented as a limitation on data usage, which could also be considered unethical.[6] For example, the sharing of healthcare data can shed light on the causes of diseases and the effects of treatments, and can allow for tailored analyses based on individuals' needs.[6] This is of ethical significance in the big data ethics field because, while many value privacy, the affordances of data sharing are also quite valuable, although they may contradict one's conception of privacy. Attitudes against data sharing may be rooted in a perceived loss of control over data and a fear of the exploitation of personal data.[6] However, it is possible to extract the value of data without compromising privacy.

Some scholars, such as Jonathan H. King and Neil M. Richards, are redefining the traditional meaning of privacy, while others question whether privacy still exists.[4] In a 2014 article for the Wake Forest Law Review, King and Richards argue that privacy in the digital age can be understood not in terms of secrecy but in terms of the regulations which govern and control the use of personal information.[4] In the European Union, the Right to be Forgotten entitles EU countries to force the removal or de-linking of personal data from databases at an individual's request if the information is deemed irrelevant or out of date.[7] According to Andrew Hoskins, this law demonstrates the moral panic of EU members over the perceived loss of privacy and the ability to govern personal data in the digital age.[8] In the United States, citizens have the right to delete voluntarily submitted data.[7] This is very different from the Right to be Forgotten, because much of the data produced using big data technologies and platforms is not voluntarily submitted.[7]

Currency

The business models driving tech giants have uncovered the possibility of making the human identity the product to be consumed. While tech services, including search engines, communication channels, and maps, are provided for free, the new currency uncovered in the process is personal data.

There is a variety of opinion around whether it is ethical to receive money in exchange for access to your personal data. Parallels have been drawn with blood donation, where the rate of infectious blood donated decreases when there is no financial transaction for the blood donor. Additional questions arise around who should receive the profit from a data transaction.

How Much is Data Worth?

What is the exchange rate of personal data to money? Data is valuable because it allows you to act more efficiently than when you are guessing or operating by trial and error. There are two elements of data that have value: trends and real-time information. The build-up of historical data allows us to make future predictions based on trends; real-time data has value because it allows you to act instantaneously.

How much are tech services such as a search engine, a communications channel, or a digital map actually worth, for example in dollars? The difference between the value of the services facilitated by tech companies and the equity value of those companies is the difference between the exchange rate offered to the citizen and the 'market rate' of the value of their data. Scientifically, there are many holes to be picked in this rudimentary calculation: the financial figures of tax-evading companies are unreliable; it is unclear whether revenue or profit is more appropriate; how do you define an active user; a large number of individuals is needed for the data to be valuable; would there be a tiered price for different people in different countries; not all Google revenue is from Gmail; and so on. Although these calculations are undeniably crude, the exercise serves to make the monetary value of data more tangible. Another approach is to find the data trading rates on the black market; RSA publishes a yearly cybersecurity shopping list that takes this approach.[9] The examples given only cover specific cases, but if profits from data sales were extended to other areas such as healthcare, the monthly profit per individual would increase.
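
As a back-of-the-envelope illustration of this rudimentary calculation, with explicitly hypothetical placeholder figures rather than real financial data:

```python
# All figures below are hypothetical placeholders for illustration only.
equity_value_usd = 500e9        # assumed market capitalisation of a tech firm
annual_revenue_usd = 100e9      # assumed yearly revenue, mostly ad-driven
monthly_active_users = 2e9      # assumed user base

# Two crude per-user "exchange rates" for personal data:
value_per_user_equity = equity_value_usd / monthly_active_users
value_per_user_revenue = annual_revenue_usd / monthly_active_users

print(f"equity value per user:   ${value_per_user_equity:,.0f}")
print(f"annual revenue per user: ${value_per_user_revenue:,.0f}")
```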

This raises the economic question of whether free tech services in exchange for personal data is a worthwhile implicit exchange for the consumer. In the personal data trading model, rather than companies selling your data, you as the owner can sell your personal data and keep the profit.[10] Personal Data Trading (PDT) is a framework that gives individuals the ability to own their digital identity and create granular data sharing agreements via the Internet. Rather than the current model, which tolerates companies selling personal data for profit, in PDT individuals would directly own and consciously sell their personal data to known parties of their choice and keep the profit. At the core is an effort to re-decentralise the Internet. PDT adds a fourth mechanism for wealth distribution, the other three being salaries via jobs, property ownership, and company ownership. The ultimate goals of the PDT model are a more equitable global resource distribution and a more balanced say in the allocation of global resources. Personal data trading by individuals in the proposed framework would distribute profits among the population, but it could also have radical consequences for societal power structures. It is now widely acknowledged that the current centralised data design exacerbates ideological echo chambers and has far-reaching implications for seemingly unrelated decision-making processes such as elections. The data exchange rate is not only monetary; it is ideological. Do institutional processes have to be compromised by the centralised use of communication tools guided by freely harvested personal data?

While initially it is realistic to assume that data would be traded for money, it is possible to imagine a future where data would be traded for data. The 'I'll show you mine if you show me yours' scenario could replace money altogether. Importantly, this is a future scenario, and the first step is to focus on exchanging personal data for existing monetary currency.

Openness

The idea of open data is centred around the argument that data should be freely available and should not have restrictions, such as copyright laws, that would prohibit its use. Many governments have begun to move towards publishing open datasets for the purposes of transparency and accountability.[11] This movement has gained traction via "open data activists" who have called for governments to make datasets available so that citizens can extract meaning from the data and perform checks and balances themselves.[11][4] King and Richards have argued that this call for transparency includes a tension between openness and secrecy.[4]

Activists and scholars have also argued that because this open-sourced model of data evaluation is based on voluntary participation, the availability of open datasets has a democratizing effect on a society, allowing any citizen to participate.[12] To some, the availability of certain types of data is seen as a right and an essential part of a citizen's agency.[12]

The Open Knowledge Foundation (OKF) lists several dataset types that should be provided by governments in order for them to be truly open.[13] The OKF has a tool, the Global Open Data Index (GODI), a crowd-sourced survey for measuring the openness of governments according to the Open Definition.[13] The aim of the GODI is to provide important feedback to governments about the quality of their open datasets.[14]

Willingness to share data varies from person to person. Preliminary studies have been conducted into the determinants of the willingness to share data. For example, some have suggested that baby boomers are less willing to share data than millennials.[15]

Personal Data of Children

Although parents or guardians of minors below the age of 18 have responsibility for their children's data, they cannot transact in their child's data in exchange for money. Rather, data transactions can only be donations, which opens up the possibility of using child data in contexts such as public healthcare and education.

Roles of institutions

Government

Data sovereignty refers to a government's control over the data that is generated and collected within a country.[16] The issue of data sovereignty was heightened when Edward Snowden leaked US government information about a number of governments and individuals whom the US government was spying on.[16] This prompted many governments to reconsider their approach to data sovereignty and the security of their citizens' data.[16]

J. De Jong-Chen points out how the restriction of data flow can hinder scientific discovery, to the disadvantage of many, but particularly of developing countries.[16] This is of considerable concern to big data ethics because of the tension between the two important issues of cybersecurity and global development.

Banks

Banks hold a position in society as keepers of value, and their data policy should not compromise the trust relationship with their clients in that role. For example, if a bank shares data about one butcher with another butcher, this could compromise the trust relationship through the revelation of data to competitors.

Relevant News Items about Data Ethics

The Edward Snowden revelations, beginning on June 5, 2013, marked a turning point in the public debate on data ethics. The ongoing publication of leaked documents has revealed previously unknown details of the global surveillance apparatus run by the United States' NSA in close cooperation with three of its Five Eyes partners: Australia's ASD, the UK's GCHQ, and Canada's CSEC.

In the Netherlands, ING Bank made a public statement about their intentions around data usage.

The Facebook-Cambridge Analytica data scandal involved the collection of the personal data of up to, and quite possibly more than, 87 million Facebook users in an attempt to influence voter opinion. Both the 2016 Brexit campaign and the 2015-16 campaigns of the US politicians Donald Trump and Ted Cruz paid Cambridge Analytica to use information from the data breach to influence voter opinion.

Relevant Legislation about Data Ethics

On 26 October 2001, the Patriot Act came into force in the USA in response to the broad concern felt among Americans following the September 11 attacks. Broadly speaking, the Patriot Act paved the way for security forces to surveil citizens suspected of involvement in terrorist acts.

On 25 May 2018, the General Data Protection Regulation 2016/679 (GDPR) came into effect across the European Union. The GDPR addresses the transparency owed by data controllers to individuals, referred to as data subjects, and the need for permission from data subjects before their personal data is handled.

Manifestos, Declarations, and Unions

There are several manifestos concerning data ethics that collect signatures from supporters.

Name of data manifesto, declaration, or union | Description | Main authors, editors, or sponsors
Mis Datos Son Mios | Consumer rights data union | Organización de Consumidores y Usuarios
Datavakbond | A data union | Led by a member of the European Parliament
Tada | A manifesto adopted by the city of Amsterdam | Amsterdam Economic Board
Data Leaders | Unknown | Unknown
User Data Manifesto | Unknown | Unknown
MyData | A declaration written by three principal authors as a first step towards building a community for the conference | MyData Conference organisers
Data Ops Manifesto | Unknown | Unknown
Royal Statistics Society Data Manifesto | Unknown | Royal Statistics Society

See also

References

  1. Kitchin, Rob (18 August 2014). The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences. SAGE. p. 27. ISBN 9781473908253.
  2. Zwitter, A. (2014). "Big Data Ethics". Big Data & Society 1: 4.
  3. O'Neil, Cathy (2016). Weapons of Math Destruction. Crown Books. ISBN 978-0553418811.
  4. Richards, N. M.; King, J. H. (2014). "Big data ethics". Wake Forest Law Review 49: 393–432. SSRN 2384174.
  5. Kitchin, Rob (2014). The Data Revolution: Big Data, Open Data Infrastructure and Their Consequences. SAGE Publications. pp. 178–179.
  6. Kostkova, Patty; Brewer, Helen; de Lusignan, Simon; Fottrell, Edward; Goldacre, Ben; Hart, Graham; Koczan, Phil; Knight, Peter; Marsolier, Corinne (2016). "Who Owns the Data? Open Data for Healthcare". Frontiers in Public Health 4: 7. doi:10.3389/fpubh.2016.00007. ISSN 2296-2565. PMC 4756607. PMID 26925395.
  7. Walker, R. K. (2012). "The Right to be Forgotten". Hastings Law Journal 64: 257–261.
  8. Hoskins, Andrew (4 November 2014). "Digital Memory Studies". memorystudies-frankfurt.com. Retrieved 28 November 2017.
  9. RSA (2018). "2018 Cybersecurity Shopping List" (PDF).
  10. László, Mitzi (1 November 2017). "Personal Data Trading Application to the New Shape Prize of the Global Challenges Foundation". Global Challenges Foundation. p. 27.
  11. Kalin, I. (2014). "Open data policy improves democracy". The SAIS Review of International Affairs 34: 59–70. doi:10.1353/sais.2014.0006.
  12. Baack, S. (2015). "Datafication and Empowerment: How the open data movement re-articulates notions of democracy, participation, and journalism" (PDF). Big Data & Society 2: 1–2.
  13. Open Knowledge. "Methodology - Global Open Data Index". index.okfn.org. Retrieved 23 November 2017.
  14. Open Knowledge. "About - Global Open Data Index". index.okfn.org. Retrieved 23 November 2017.
  15. Emerce. "Babyboomers willen gegevens niet delen". emerce.nl. Retrieved 12 May 2016.
  16. de Jong-Chen, J. (2015). "Data Sovereignty, Cybersecurity, and Challenges for Globalization". Georgetown Journal of International Affairs: 112–115.
