(How) Can Software Agents Become Good Net Citizens?, by Sabine Helmers, Ute Hoffmann, and Jillian Stamos-Kaschke
Bots and Humans Together
So far, human Netizens have lived on the Internet alongside a growing number of non-human Net inhabitants "living" among them. For human and non-human Net inhabitants to coexist and interact peacefully, the Net relies on voluntary self-regulation and on appeals to people's sense of responsibility. This seems to be sufficient. The social and technical development of the Internet rests on trust in the assumption that the Internet successfully regulates itself. Only very few questions are dealt with under the observation and coordination of central regulatory bodies, such as the Internet Assigned Numbers Authority (IANA), which is in charge of Internet address and Domain Name administration. Above this basic level of network administration and technology development (e.g., the Internet Protocol for data transmission), no commonly accepted authorities have been established on the Internet.
But nowadays, more and more people on the Net are organizing all kinds of "neighborhood watch" efforts around the goals or problems they care about: protecting the traditional freedom of the Net (e.g., the Electronic Frontier Foundation); or, on the contrary, constraining traditionally free ways of interacting for the sake of non-traditional Internet user groups, such as children, and making the Internet a safe place for all (e.g., Cyber Angels); or bringing proper law and order to the Net (e.g., the Internet Law and Policy Forum). Considering the negative potential that the growing number of agents of various kinds might harbor, apart from their supposed helpfulness, it might be worth keeping an eye on the non-human actors on the Net as well. If the rules of Netiquette apply to all Netizens, and if there is a form of social control over the behavior of human actors on the Net, then why should non-human actors not be included?
If non-human actors turn out to be problematic in any way (e.g., by consuming excessive bandwidth or annoying other Netizens with email flooding), then organized forms of a specific neighborhood watch over fellow non-human Netizens' behavior might even become necessary. A drawback of this form of "agents watch" is that one can only respond to negative outcomes once they occur, reacting to software agents that have already been developed and brought online. We have to trust in the developers' sense of responsibility. And once the software has been developed, a real chance to verify what it will do exists only if its source code is publicly available. The source code of commercial software (which is used more frequently than public domain software, whose source code is publicly available) is kept as secret as the recipes of soft-drink makers: if you tell how a product is made, it can be copied, either by the consumers themselves or by other firms.
There is no way to prevent a negative effect beforehand, during the development process. All that Netizens can do is observe what happens once they have bought the software and used it online. And if a supposedly helpful servant of yours secretly acts as someone else's servant and gives away secrets, your online behavior for example, because a second, hidden master wants your data for marketing research or for political control as in totalitarian countries, you will not know until you first notice that your harmless servant is serving another individual's or group's purposes. If problems like this occur on a larger scale, it would be sensible to appoint an independent control board to examine software agents, much as consumer boards do with other products or as the Food and Drug Administration does with formulas and medical devices, testing products that are already available or about to enter the market. The test results could then be distributed throughout the Internet. Agent look-up might be facilitated by agent registries. Registries could be organized either by domain, i.e., according to the location of the host agencies in which the bots reside, or according to the four general bot habitats: the Web, Usenet, IRC, and MUDs.
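The two registry organizations proposed above, by host domain and by habitat, could be sketched as a simple dual index. This is only an illustrative sketch: the class names, fields (including the `test_result` slot for a control board's published verdict), and example agents are all hypothetical, not part of any existing system.

```python
# Hypothetical sketch of an agent registry indexed two ways:
# by the four general bot habitats, and by the host agency's domain.
from dataclasses import dataclass

HABITATS = {"Web", "Usenet", "IRC", "MUD"}

@dataclass
class AgentRecord:
    name: str                        # the bot's name (hypothetical field)
    habitat: str                     # one of the four general bot habitats
    host_domain: str                 # domain of the host agency where the bot resides
    test_result: str = "untested"    # e.g. a control board's published verdict

class AgentRegistry:
    """Keeps agent records retrievable both by habitat and by host domain."""

    def __init__(self):
        self.by_habitat = {h: [] for h in HABITATS}
        self.by_domain = {}

    def register(self, record: AgentRecord):
        if record.habitat not in HABITATS:
            raise ValueError(f"unknown habitat: {record.habitat}")
        self.by_habitat[record.habitat].append(record)
        self.by_domain.setdefault(record.host_domain, []).append(record)

    def lookup_habitat(self, habitat):
        # Returns all registered agents living in the given habitat.
        return list(self.by_habitat.get(habitat, []))

    def lookup_domain(self, domain):
        # Returns all registered agents hosted under the given domain.
        return list(self.by_domain.get(domain, []))

registry = AgentRegistry()
registry.register(AgentRecord("newsbot", "Usenet", "example.edu", "passed"))
registry.register(AgentRecord("shopbot", "Web", "example.com"))

print([r.name for r in registry.lookup_habitat("Usenet")])   # ['newsbot']
print([r.name for r in registry.lookup_domain("example.com")])  # ['shopbot']
```

Keeping both indices in one registry means a Netizen could either audit all bots run from a particular host agency or survey everything active in, say, IRC, matching the two look-up paths the text suggests.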