Computers, Information Technology, the Internet, Ethics, Society and Human Values

Philip Pecorino, Ph.D.

Queensborough Community College,  CUNY

Chapter 9 Information Technologies and Accountability

Presentation of  Issues

What does it mean to be accountable?  Responsible?  Morally responsible?

When something occurs, particularly something involving human beings, we may ask for an account of what happened.  What is expected is a report or description of how the event occurred and of what caused it to happen as it did.  When the event involves human beings it might be an occurrence met with joy or with pain; it might involve some accomplishment or achievement, or it might involve some tragedy or loss.  The account offered is often expected to include a description indicating which human beings were responsible for the outcome.  Machines or inanimate objects may have been involved and are included in the report, but when an account is sought in order to identify responsibility for an event, it is humans who are expected to be held responsible and not inanimate objects.  But what happens when the event was produced through a series of prior events involving computers, information networks, and software?  How is responsibility for an event determined in such cases?

With computer technologies it is sometimes difficult to ascertain just which human beings are the most appropriate persons to respond to an act or event or situation so as to provide an accounting of it.  Similarly, it is at times difficult to locate which individuals are to be held liable or legally responsible, to the point of being required to pay damages or to lose their freedom for a time while incarcerated.  Some situations involving computers are so complex that they involve many different sorts of activities with many people participating in them.  Some computer operations are global in scope.  Some involve levels of anonymity.  It is presently often most difficult to hold any specific individual accountable for some general computer problem or calamity, such as the Y2K situation.

Here is a good overview of the basic issues:

READ: Computing and Moral Responsibility, Stanford Encyclopedia of Philosophy

It covers these topics:

  • 1. Responsibility
  • 2. Responsibility for Computer Use
    • 2.1 Case Studies
    • 2.2 Barriers to Responsibility
      • 2.2.1 The problem of many hands
      • 2.2.2 Bugs
      • 2.2.3 Blaming the computer
      • 2.2.4 Ownership without liability
      • 2.2.5 Poor articulation of norms
      • 2.2.6 Assumption of ethical neutrality
    • 2.3 Recommendations for Overcoming Barriers
      • 2.3.1 Ensure our understanding of responsibility is appropriate for the task at hand
      • 2.3.2 Redesign the computer system
      • 2.3.3 Clearly articulate norms
    • 2.4 Clarification of Responsibilities
  • 3. Can Computers Be Morally Responsible?
  • 4. Can Humans Responsibly Give Decision-Making Power to Computers?

What follows here draws on the work of Deborah Johnson in Computer Ethics, 3rd ed. (Prentice Hall: Upper Saddle River, NJ, 2001).

It has become increasingly difficult to ascertain exactly who is accountable or responsible for which consequences of the development and application of computer technologies.  As these technologies have grown in number and variety and have come to be used around the globe, as so many users are cloaked in various forms of anonymity, and as so many applications and so much information are so easily reproduced and disseminated, the situations resulting from the adoption of those technologies have become complex.  How are people to be held accountable for the undesired consequences of the applications and operations of those technologies?   It is important to clarify some basic concepts before proceeding with this presentation of the issue of accountability.

What does it mean to be accountable?

Accountability involves being held responsible for providing an account of what has occurred, or being held responsible for the event itself, or being held to be a cause of the event.

As for the concept of responsibility, there are various forms of responsibility, and there is further the question of liability.  When a product leads to some form of harm, those harmed may want to hold someone responsible for causing that harm, and those responsible for the product are held to be liable for the results.  For products made available to the mass market there is a standard known as strict liability.  For harm that results from the provision of a service, such as software developed for particular clients, the individuals responsible for that harm are held to be negligent.  For situations that involve both a product and a service, such as the custom tailoring of generally available software to the particular needs of a client, both strict liability and negligence may be present when harm results from the provision and use of those products and services.

Here are some definitions of terms.

role-responsibility

what individuals are expected to do in virtue of one of their social roles

causal responsibility

an individual does (or fails to do) something that causes an untoward event

blameworthiness

an individual does something wrong and the wrongdoing led to an untoward event or circumstance.  This is a sense of being at fault, of having committed or omitted an action contrary to what was correct or required of the actor.  Often this results from a failure to fulfill a role-responsibility.

liability

imposed when the law requires damages to be covered by some agent

strict liability

liability "without fault" – imposed when a person or company is responsible to pay damages even though there may have been no act clearly seen or admitted as being wrong.  This is often the case with products whose use caused harm in some way.  Liability is used most often to describe how situations are treated legally.

Liability relates to holding a person or entity obliged to pay damages or compensation.

negligence

when individuals do not act according to some standard set for them by virtue of their profession or position and some harm results.  This applies to services provided by computer professionals, technicians, and programmers.

mixed responsibility

this applies when both a product and a service are involved, as in the instance of a software product being modified for use by a client.  In this case a problem might arise that would involve both strict liability, related to the software product itself, and negligence, related to the modifications made to the program to meet the specifications and needs of the client.

Does the existence of many people participating in the production or application of a computer program so diffuse responsibility that there is no one person to be held accountable?

READ: The problem of many hands

Accountability and Computer and Information Technology, in Computer Ethics, ch. 7, by Deborah Johnson -- summary by Kimberly Beuther (CUNY, 2006)
 
         In Deborah Johnson's book, Computer Ethics, she considers the issue of accountability for different problems that occur with computer technology and the internet. She offers insight on the issue of accountability as it applies to the Y2K problem, ISP liability, and virtual actions.
         In regard to the Y2K problem, which worried many people as the year 2000 approached, Johnson explains that it is difficult to determine who is accountable because so many people were involved: the original creators of computers, software and hardware manufacturers, programmers, and government agencies. She indicates that the inventors of computers cannot be held accountable, because they could not have known the scope to which computers would be used. Then there are the computer professionals who, Johnson points out, were aware of the problem but did not act in a timely manner. Johnson also says that while the computer professionals may have seemed blameworthy, the problem was solved, and the Y2K disaster never occurred, at least not to the extent that it was expected to. So, in this regard, Johnson says that the computer professionals took the appropriate actions to keep the disaster from occurring. Johnson explains that everyone involved, those who produced, sold, and used computer technology, was causally responsible. She says, "the best strategy for minimizing the riskiness and unreliability of computing is 1) to make sure that role-responsibilities of all those involved are well articulated; 2) to make sure that all those involved understand the effects of their work on human beings; and 3) to hold all those liable when they fail to live up to their role-responsibilities." (Johnson, 2001)
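
For readers who have not encountered the technical root of the Y2K problem, the sketch below illustrates it. Many older systems stored years as two digits to save scarce memory, so the year 2000 ("00") computed as earlier than 1999 ("99"). This is a minimal illustrative sketch in Python; the function names and sample figures are invented for this example, not drawn from any actual legacy system.

    # Minimal sketch of the Y2K bug: storing only the last two digits of
    # the year makes 2000 ("00") look earlier than 1999 ("99").

    def years_between_legacy(start_yy, end_yy):
        # Legacy-style arithmetic on two-digit years.
        return end_yy - start_yy

    def years_between_fixed(start_yyyy, end_yyyy):
        # Corrected arithmetic on full four-digit years.
        return end_yyyy - start_yyyy

    # An account opened in 1999 and examined in 2000:
    print(years_between_legacy(99, 0))       # -99  (a nonsense negative interval)
    print(years_between_fixed(1999, 2000))   # 1    (the correct interval)

A repair of this sort had to be hunted down across millions of lines of code written by countless programmers over several decades, which is precisely why, as Johnson notes, no single party could plausibly be singled out as accountable.
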
         Johnson addresses the issue of ISP liability by questioning whether ISPs can be responsible for the information that is posted by others on their forums and in their chat rooms. Though it would seem that the most appropriate placement of responsibility is on the individual who posts defamatory or false information, the anonymity of such users makes it nearly impossible to find them. Also, it would take a huge amount of resources to monitor every chat room, and if this extent of monitoring were to occur, it would raise issues concerning the impact on freedom of speech. Johnson explains that early on, ISPs and bulletin boards were being sued by individuals whose reputations were tarnished by defamatory statements posted about them on sites. At first courts found that ISPs could be held liable because they had editorial control, and likened them to newspapers. However, Johnson points out that ISPs are carriers much like telephone companies. She also says that while the CDA (Communications Decency Act) was not upheld, a section of it that addresses ISP liability is still in force: Section 230 indicates that ISPs are not liable for content. They are, however, encouraged by users and the government to filter and censor forums.
            Finally, Johnson addresses an issue called virtual action. Virtual action is action that takes place in a virtual environment. It may seem that virtual actions are not real, because they do not occur in a physical environment, but Johnson refutes this idea: while the action is not physical, it can still be harmful. She gives a scenario in her book of a game played by multiple users that replicates real life, in which each player creates a character and constructs that character's life. One such user changed code, took control of other users' characters, and virtually raped them. The question posed by Johnson is whether or not this user bears any moral responsibility. Since the action occurred only in a virtual environment, it may appear that this user did nothing wrong. Johnson, however, says that while the behavior did not physically occur, the user was still accountable for it. First, Johnson says that the user exposed the other players to violence and obscenity without their consent, and we hold people liable and responsible for causing such harm through written and spoken words. Secondly, Johnson says that the user violated the rules of the game. Most importantly, this user is blameworthy because of the reaction of the other players in the game. Johnson illustrates that while this behavior took place in a virtual environment, there are still very real effects. In the case of virtual actions, Johnson says that each case needs to be analyzed to fully understand the ethical issues involved. It is unfortunate that the use of the term virtual leads people to conclude that these actions are not real; she says that humans are still doing these actions, merely using computers as the instrument.
 
Resource: Johnson, Deborah G. 2001. "Accountability and Computer and Information Technology," pp. 168-198 in Computer Ethics, 3rd ed. Upper Saddle River, NJ: Prentice Hall.

The Problem of Many Hands by Jack Freidman, CUNY, SPS, 2007

Many hands refers to the fact that computer programs require teams of people to commission, create, develop, write, test, debug, and distribute them. These teams include everyone from a CEO down to a sales rep, from a design engineer to a hardware expert. The problem with many hands is that it is often difficult to assess blame when things go wrong. Many articles on the subject consider many hands and whether this diffusion of responsibility is even deliberate. For example, in a paper written in 1994, Nissenbaum comments, "some cynics argue that institutional structures are designed in this way precisely to avoid accountability". She continues the somewhat cynical commentary by adding, "In some cases, it may be the result of intentional planning, a conscious means applied by the leaders of an organization to avoid responsibility for negative outcomes, or it may be an unintended consequence of a hierarchical management in which individuals with the greatest decision-making powers are only distantly related to the causal outcome of their decisions. Whatever the reason, the upshot is that victims and those who represent them, are left without knowing at whom to point a finger."

Perhaps Nissenbaum's last point is the most relevant. It is not as important to know whom to blame as it is to understand the difficulties faced by the victims who may suffer. If Nissenbaum is correct, and people are left without someone to blame, then harms done through a computer program may come to be treated as accidents rather than as negligence, leaving victims unable to recover damages. Nissenbaum points out, too, that "Even when we may safely rule out intentional wrongdoing it is not easy to pinpoint causal agents who were, at the same time, negligent or reckless. As a result, we might be forced to conclude that the mishaps were merely accidental in the sense that no one can reasonably be held responsible, or to blame, for them."

The case she was referring to was that of the Therac-25, in which a programming error led to the deaths of several individuals as a result of incorrect radiation doses being administered to cancer patients. While Helen Nissenbaum is a Computer Science Senior Fellow at the Information Law Institute of NYU School of Law, her essay deals with the moral as well as the legal side of the issue. She does address the legalities by saying, "I have suggested that a number of individuals ought to have been answerable (though not in equal measure), from the machine operator who denied the possibility of burning to the software engineers to quality assurance personnel and to corporate executives. Determining their degree of responsibility would require that we investigate more fully their degree of causal responsibility, control, and fault. By preferring to view the incidents as accidents, however, we may effectively be accepting them as agentless mishaps, yielding to the smoke-screen of collective action and to a further erosion of accountability." She concludes by summarizing: "The general lesson to be drawn from the case of the Therac-25 is that many hands obscured accountability by diminishing in key individuals a sense of responsibility for the mishaps."  http://www.nyu.edu/projects/nissenbaum/papers/accountability.pdf
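
To make vivid how a single coding flaw in such a system can defeat a safety check, here is a loose model of one of the defects documented in the Leveson and Turner investigation of the Therac-25: a shared one-byte counter that wrapped around to zero every 256th increment, and a value of zero caused the machine's setup verification to be skipped. This is a simplified illustration in Python with invented names and structure, not the machine's actual PDP-11 assembly code.

    # Loose model of one documented Therac-25 defect: a one-byte counter
    # wraps to 0 every 256th increment, and a 0 value causes the
    # setup-verification check to be skipped entirely.

    class3 = 0  # shared one-byte flag: nonzero means "verify the setup"

    def treatment_pass(setup_ok):
        global class3
        class3 = (class3 + 1) % 256    # one-byte storage wraps 255 -> 0
        if class3 != 0 and not setup_ok:
            return "fault: treatment halted"
        return "beam fired"            # every 256th pass fires despite a bad setup

    # Simulate 256 passes on a misconfigured machine:
    results = [treatment_pass(setup_ok=False) for _ in range(256)]
    print(results.count("beam fired"))  # 1 -- the pass where the counter wrapped to 0

A flaw like this is invisible to the operator, to the hospital, and to most of the development team itself, which is exactly the condition under which, as Nissenbaum argues, many hands obscure who should answer for the harm.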

Whether it be the Therac-25 case or the loss of the Space Shuttle Challenger and its astronauts, complex engineered systems can lead to disaster, and by nature it takes a team rather than an individual to create them. The Dutch Centre for Ethics and Technology also examines the complicated topic of many hands. The scope of their project is laid out here: "This project will therefore study the problem of many hands in R&D networks. We conceive of this problem as the moral problem of the tension between two requirements for a desirable distribution of responsibilities in R&D networks. One is that the distribution ought to be complete in the sense that for each moral issue someone is responsible. The other is that the distribution ought to be fair. The goal of the research is to contribute to the solution of the problem of many hands in R&D networks by gaining insight in how to reconcile the requirements of completeness and fairness in the distribution of responsibilities in R&D networks. Three partial projects will be carried out to attain this goal. The first develops a notion of moral responsibility that offers good chances for achieving a fair and complete distribution of responsibilities in R&D networks. The second will apply the theory of an existing formal model for responsibility to investigate the influence of the organizational structure of an R&D network on the achieving of a complete and fair distribution of responsibilities. Using case studies of R&D networks, the third investigates whether Rawls' notion of wide reflective equilibrium is a good starting point for achieving a complete and fair distribution of responsibilities. The research will result in a notion of moral responsibility and in designs for network structures that may help to overcome the problem of many hands in R&D networks." http://www.ethicsandtechnology.eu/index.php/projects/detail/moral_responsibility_in_rd_networks/. What makes this project particularly interesting is that where Nissenbaum seems to use utility in excusing possible culprits in her essay, the CET project looks at Rawls's theories from several different perspectives in considering the ethics of many hands and its relation to programming and potential consequences.

Finally, a University of Alberta course page sums up Nissenbaum's elements best: "The Problem Of Many Hands

•     computer systems are often developed by teams  

•     the locus of decision making is often far from the most direct causal antecedent (the decision maker is often not the one who implemented the faulty decision)

•     computer systems often incorporate pre-existing code, the authors of which may be long forgotten

•     computer systems often operate within complex symbiotic relationships to the hardware they run on -- it can be hard to determine what happened

•     "We should not, however, confuse the obscuring of accountability due to collective action, with the absence of blameworthiness" -- we cannot accept "agentless mishaps" http://www.cs.ualberta.ca/~hoover/cmput300/Lectures/Professional/professional-1.htm

While there are inherent problems in team approaches to projects, and too many hands does blur the individual participation of each member, these group efforts are a fact of our society. Many projects are far too complicated and require diverse skill sets, making many hands an unavoidable fact of life. With that accepted, I fall back on the Utilitarian theory: as long as the intentions of the team members are united behind helping society, and with the caveat that all reasonable actions are taken to ensure safe results from a project, the project is an ethical one. Only in the case of deliberate malfeasance or negligence would I feel differently. In that case I would look to Kant and his prohibition of using anyone merely as a means to an end in considering the project unethical.

What is the problem of many hands?  by Marie Lafferty, CUNY, SPS, 2007

The problem of many hands comes with computer programming, distribution, and usage because of the complex nature of the creation of software.  Beginning with a request for software, often at a corporate level, writing code for large programs requires multiple programmers, often working simultaneously toward an end. Even with communication, they mostly cannot know, and cannot inspect, the sections of code written by others. It is doubtful that any one person has a good grasp of the details of any software package; the volume of code is simply too large. This leaves no one looking responsible. In addition, the software is distributed and purchased, adding even more people to the mix, as users with little or some experience now have a powerful program in their hands to do with as they will. So, in a sense, no one has control, and without control, pinpointing responsibility is difficult and problematic. There are some who would simply let the issue rest with the statement that "it's the computer's fault" or "the computer didn't work." Any logical thinker must realize that assigning blame to a series of numbers or electronic circuit boards is meaningless: every action of a computer is at its source a human action. The hardware itself thus has the same problem of many hands, and trying to assess responsibility becomes even more obscure. "As Gotterbarn (2001) notes, this problem is due in part to a malpractice model of responsibility which looks to assign blame and mete out punishment, and is due in part to an individualistic model of responsibility which looks to assign responsibility to one person. This latter, individualistic model is inadequate for complex computer systems in particular for a number of reasons." [1]  The information age demands that we create a new model.


[1] http://plato.stanford.edu/entries/computing-responsibility/#2.2.1

READ: The Invisibility Factor by James Moor

If society as a whole, and institutions, corporations, and technology users in particular, need to hold those responsible for the creation and operation of computer technologies legally and morally responsible for whatever harms result from them, then there will need to be a greater effort to educate and train those involved with the technologies as to the range of their responsibilities and the consideration of the possible consequences of what they do.  This education is part of what professionals, particularly licensed professionals, undertake; it involves the notion of professional responsibilities and will be treated in a subsequent chapter.

The Moral Issues: Applying Ethical Principles and the Dialectical Process

In approaching the questions, issues, problems, and dilemmas posed by developments in computer technologies, there is a need to analyze the situation and identify the key elements and values that may be involved and the ethical principles that can be brought to bear.  An argument needs to be developed in support of the position that is to be advanced as the preferred position on the moral question.  That position is then examined by others who hold different values, or hold the same values in a different order, and who would apply ethical principles in a different manner, rejecting one or another for reasons which should be given.  The process continues until enough people think that one position is the best of the alternatives.  Given the nature of the original problem or question and the size of the populace who hold the majority position, social policies or even legislation may result.

Values

Among the highest values held by many people are health and life.  There ought not to be unnecessary or gratuitous acts of harm to people.  As more of human life is impacted by computer technologies, there have been acts of harm to humans as a result of faulty programs, faulty operations of those programs, or other actions of humans involving computers and information networks.  Many think it quite important that those responsible for harming humans be identified and held accountable, including the assigning of penalties for irresponsible behavior.

Ethical Principles

Ethical egoists might think that their own actions or omissions ought not to be held up for review by others, with responsibility for harm assessed and penalties imposed.  It is difficult to find other ethical principles that could be used to support that general conclusion.

In applying utility to the question of accountability and responsibility for harms, the interests of the greatest number of persons are served better by holding people accountable for their actions or omissions, determining responsibilities and liabilities, and assigning penalties.

For Kant the Categorical Imperative would lead one to conclude that it is morally correct to respect the lives and well-being of others and to avoid causing any harms.  Thus there is a moral ground for holding those responsible for a harm liable for it in order to diminish the possibility of harms.  The issue then becomes one of determining just which agents are or were responsible for which outcomes or consequences of their actions or omissions.

For Rawls the Principle of Justice involves promoting a maximum of liberty while improving the lot of those least well off, minimizing the differences.  Those harmed would be the least well off, and their position would be improved by holding those responsible for the harms accountable and imposing penalties upon them.  Such a course of action would tend to make people behave with greater concern for the welfare of others, and that would improve the situation of the least well off, as the likelihood of their risk or exposure to harm would be decreased by an increase in caution and care on the part of computer specialists.

Other arguments can be advanced in support of resolutions of this issue of accountability, and an argument can be developed using a multiplicity of ethical principles in support of a particular conclusion as to what resolution is morally correct.

************************************************************************************************

Reflections on Information Technology Accountability by Lindsey Pehrson, CUNY SPS, 2009

Human beings have an inherent need to know about their world, as our long history of testing previously held boundaries in search of new territory has shown. This desire to break outside of the familiar has shaped civilizations through the founding of new frontiers, both physical and technological. When these endeavors do not turn out as we might have expected, or we are injured in any way during the process, we also have a tendency to blame someone. This concept of fault has taken on various forms over the years, expanding to include numerous definitions and leaving us with the continual questions of what accountability is and how it is decided. In more recent decades, as our information technology has expanded, society has also been confronted with new questions of liability: can a computer be responsible when something goes wrong? How is blame assigned? What is the recourse for damages? How is blaming a computer different from blaming another human being?

            In the English language, there are numerous words for accountability. This concept of placing blame and taking blame consistently makes its way into our daily language. Every time we tell a story about something that has happened to us, we usually include a description of the event along with the details that essentially identify the person who played the culprit and the individual who was the victim. As the story lines get more complicated, it can become harder to identify the guilty parties. This is particularly true when we retell the tales of our technology breaking down. When a series of activities inside our networks or our computers creates a flawed result, an error that harms us in some fashion or goes against our primary objective, how do we clearly pinpoint blame? In the end we find that laying blame on a human being is more easily acceptable than blaming an inanimate object that cannot give its side of the story or offer justification. We need someone to fight back and clarify his intent so that we can win a resolution and find satisfaction in the exchange.

            Knowing that our human ability to judge separates us from the programmed and potentially error-filled judgments of information technology, can we fairly claim that a computer has moral responsibility? If a computer is not morally responsible, then who is responsible for computers? Discussing this opens the door to bigger questions such as: what exactly is responsibility? What is accountability? And what does it really mean to be morally responsible? According to ethics writer Deborah Johnson and the Stanford Encyclopedia of Philosophy, there are numerous forms of accountability. There is role responsibility (the manner in which a person is expected to act based on their societal role), causal responsibility (when someone does something, or fails to do something, that causes a problematic event), blameworthiness (an individual commits an error, either by doing something they are not supposed to or by not doing something they are supposed to, and that action directly leads to a problematic situation or event; oftentimes this is the failure to fulfill one's role responsibility), liability (when the law identifies something that has been done incorrectly and instructs damages to be paid by some group or person), strict liability (liability that occurs without anyone being directly at fault; usually this happens when an agent is required to pay damages even though there may be no clear event or admission of wrongdoing), negligence (when someone does not act in accordance with the pre-set guidelines of their profession or role, thereby inflicting harm), and mixed responsibility (a case of both a product and a service being defective, i.e., with the customization of software for a specific client).

Clearly, we have a lot of methods and requirements when it comes to ensuring that someone takes the blame. Unfortunately, deciphering who has done what and at which point can be exceedingly difficult, particularly when it comes to computers. One reason for this is known as the "problem of many hands." It refers to the fact that in the creation of every single computer, numerous people are involved, making it very hard to place blame on a single entity. Furthermore, though there is a procedural pattern that is followed, there are an infinite number of decisions that must be made, which make each resulting product unique. The problem of many hands raises numerous questions that are not easy to answer. First, are the errors that occur the result of intentional or accidental actions? Did the manufacturer know about the potential for defects prior to the release of the product? Once in our care, did we, the owners, do something or forget to do something that was critical to the proper functioning of the device? And what if the computer components are removed from one computer and used in another for purposes they were not originally designed for; can the original creators of the devices still be held accountable? This brings up a topic that must be addressed when talking about the liability of others: when we place blame, we must also be willing to take blame. This means fully identifying as many possibilities as we can for why something went wrong, including errors committed by owners and not just creators.

According to James H. Moor, a second issue preventing clear accountability in information technology is the invisibility factor. We have no one standing in front of us to blame when we are using the computer. We think that someone, somewhere must be responsible, or we get mad at the computer itself. Deborah Johnson echoes this idea in her discussion of virtual action. She believes that computers must be made in a manner that makes their flaws more apparent and their internal workings more accountable for breakdowns that occur. Her point is that someone is responsible for the object in front of you, even if that person is hard to find; it is not the fault of the computer. For those who still argue that computers are accountable for their internal actions, the Dutch Centre for Ethics and Technology has a few points to make. They claim that in order for technology to be moral (something which the myth of amoral computing tries to resolve), it must meet two requirements. First, there needs to be a person who is responsible for its morality. Second, the distribution of the product must be fair and unbiased. In other words, it appears from these guidelines that the Dutch Centre endorses the view that computers themselves cannot be moral or immoral. Rather, it is the person responsible for them who should be answerable for their actions or blunders.

A third reason computing makes accountability difficult is that we often grant ownership without making someone liable. For example, ISPs have the right to allow whatever they choose to appear on their pages; however, if any of that material is found offensive or defamatory, it is extremely hard to say the ISP is to blame. There are in fact only a handful of major cases where this has been challenged, and the outcomes tended to favor the protection of the ISPs. A fourth factor that makes responsibility hard to pin down is that when it comes time to blame a software company or developer, they can very easily say that the original request was not clearly articulated, meaning they are not responsible if the end result was imperfect.

A final argument, which all too often serves as a commonly held belief, is that technology cannot be responsible because it is ethically neutral. We need to consider this concept very carefully. Human beings are not perfect, we are not infallible, and we are certainly not unbiased. By extension, no matter how much we might strive for perfection, we the biased many are essentially incapable of making a technology, or anything else for that matter, that is totally free of bias. We might strive to, but eventually we are going to let bias slip through; it would be impossible for it not to. So, when a computer messes up, aren't we, as the creators, responsible, even if we don't know whose hands caused the problem? Based on this idea, it follows that if we place computers in charge of ethical decisions and jobs that require reasoning, we are the ones who should bear the guilt of irresponsibility when they don't perform as expected. There are certain jobs, like that of the Therac-25 machine, which are too important to be left to an unmonitored medical device. When it involves human life, we cannot just leave it up to nuts and bolts. True, technology facilitates our roles as caregivers and helps us to be more responsible and efficient in our daily lives, but it must be used in conjunction with our own moral and ethical standards.

Another thing we need to consider is that, as human beings, no matter whether we believe a computer is accountable or a person is, the simple fact remains that getting angry at a computer and making it liable is about as effective at resolving the issue as yelling at a wall. It takes two people to argue, and resolution is not reached until one person can say "I'm sorry." Then the other person forgives and they try to move past it, or they continue arguing. Computers cannot say "I'm sorry." Getting angry with one doesn't create satisfaction or a means for a resolution; it only serves to inspire more frustration. For this reason, I think it is possible that many people do not hold the computer accountable because they need to hold another human being liable, someone with whom they can have a verbal conversation about the incident. Most human beings vent when they get mad. The human who is the target of our wrath, the person who we think has done us wrong, can justify, clarify, and eventually either make us see things in a new light or give up and apologize. We need this series of steps in order to resolve the conflict between us, and within ourselves. True, there are smart computers that continue to be perfected and can "think" for themselves, but at the end of the day, if you talk to one of those computers too long, they eventually misunderstand or repeat themselves. And who really wants to type out their frustration, or wait for a computer to process their verbal cues?

According to case history, when a product breaks, or there is an outbreak of salmonella in our peanut butter, we don't blame the peanuts or the machines that roasted, creamed, and packaged them. We blame the companies that produced them and make them liable for all damages. When a line of highchairs is found to be defective, we don't blame the chairs; we go straight to the maker. Computers, though complex, are still products. True, they have many more components, but they are the creations of a specific company. When something goes wrong, it is up to the company to explain why. If the internal processor is bad, then it is their responsibility to get an explanation from their suppliers, but they should have control over their products. That is what customer service and responsibility in business is all about.

In order to understand why we attribute blame where we do, we need to take into consideration the values that serve as our basis for judgment and action and are particularly applicable to the topic of accountability. First, human beings have a right to live and prosper; perhaps this is most especially seen in our democratic societies. Second, they have a desire to be healthy, to have their well-being taken care of. This is shown through the numerous healthcare and disaster support systems available, and the repeated need of human beings to seek comfort from others. Third, they have a need to achieve justice when a wrong has been committed. This is echoed in the extensive and continually changing legal systems that can be found worldwide. We like having someone on our side, a jury of our peers or otherwise, to help us attain fairness and establish who is liable. As progress is made in the information technology sector, the changes have affected and shaped the methods for and ability to achieve legal justice, protection, and responsibility. The creation of computers, in particular, has brought untold goods and efficiencies, while also allowing for extreme cases of harm.

From an ethical viewpoint, egoists would insist that the morally good option is the one that benefits them the most. This means that ethical egoists who are software creators would firmly hold that any omissions or errors they make during the creation of information technology should be free of blame. It is important to recognize that no other ethical theory so firmly supports this point of view. Alternatively, ethical egoists who are computer technology users would insist that holding creators of information technology accountable when something goes wrong is the morally correct choice. Unfortunately, ethical egoism has no method or suggestion for resolving this conflict between the two sets of egoists. In the Utilitarian perspective, doing what satisfies the interests of the largest group of people is most morally appropriate. Human beings value their health, their right to live, and legal and societal justice when they feel something wrong has been done to them. Therefore, it appears to serve the largest number of individuals to hold various agents responsible for their actions, and to assign penalties and damages in accordance with the level of wrongdoing as determined by a judge or jury, or by individuals and groups of peers in a non-judicial setting.

Kant's Categorical Imperative holds that the most ethically viable choice is to avoid causing harm to others and to respect their sense of well-being. As such, there is a sense of true morality in making people answerable for their actions in order to fully advance the interests and safety of society as a whole. The question, as usual, then turns to one that requires painstaking work: determining just who is culpable, and whether blame should be split among groups based on their individual tasks and roles. Sadly, Kant's theory does not say how to assign the fault. Rawls' Principle of Justice also advances the belief in holding parties at fault. It states that society should act to advance the liberty of the maximum number of people while minimizing the inequalities borne by the least well-off group. If individuals were faced with the notion of either being free from any blame, ever, or being able to hold someone accountable when wrong was done to them, but did not know which group they would be in, most individuals would seek to advance their own position by ensuring justice was an option. This means that computer programmers and others who hold the keys to the creation of information technology would be made blameworthy when their work is not up to standards. Ensuring this means the ultimate advancement of society's interests.

Most of the ethical principles can be used to support the argument that the creators of information technology should indeed be made responsible for their actions and decisions. It also appears that doing this would advance society as a whole and ensure its continued health and well-being. We have seen that democratic civilizations care about justice for their people. Members of these cultures also seem to possess an inherent need to have wrongs against them righted. On a final note, it is imperative to acknowledge that when members of society feel that they cannot get justice, these unrecognized victims tend to take matters into their own hands in the form of vigilante justice. This type of behavior affects the system of justice, as well as blurring the lines between right and wrong. Can we still hold someone accountable for their actions if someone else violated the law and hurt them, but the legal system could not or would not interfere? Some say the answer is yes: a crime is a crime and you are to be held accountable for your actions. Others have argued that the individuals in question cannot be blamed, that they were acting under duress and out of desperation. My point is that we must carefully weigh the merits of allowing the authors of computers everywhere to walk away without any blame, or of merely blaming inanimate objects. Human nature craves accountability; it would be dangerous not to address and fulfill this need.

***********************************************************************************************

turn to next section

Web Surfer's Caveat: These are class notes, intended to comment on readings and amplify class discussion. They should be read as such. They are not intended for publication or general distribution. ppecorino@qcc.cuny.edu                © copyright 2006 Philip A. Pecorino

Last updated 8-2006