INTELLIGENT
SOFTWARE AGENTS AND PRIVACY
Introduction
Nowadays,
computers are commonly used for an increasing range of everyday activities.
Most of
these
activities are based on the acquisition of information. At present, users still
interactively and
directly
initiate all actions needed for a computer to execute tasks.
Due to the
enormous and fast-growing amount of data that is available, sometimes referred
to as
‘information
overload’, it is impossible to sustain the current way users interact with their
computers.
Instead of direct user-initiated interaction, users and computers should be
engaged in a
co-operative
process, a process in which both users and computers can initiate actions to
accomplish
tasks. In this
way, a computer could continue its activities without waiting for the user to
activate it.
With the use of
software agents, computer systems are capable of executing actions with
minimal
interference by
their users. This gives the user more time to spend on other activities. The
idea of
software agents
is not new. Over the years numerous researchers have been working on this
issue.
For purposes of
this study, a software agent is defined as a piece of software that acts on
behalf of its
user and tries
to meet certain objectives or complete tasks without any direct input or direct
supervision
from its user.
The lack of supervision, however, could lead to undesirable actions, such as the
violation of
the privacy of individuals concerned. Besides acting independently on behalf of
their
users, agents
may have a number of other properties, e.g. mobility, reasoning, learning,
co-operation
(negotiation)
with other agents, and cloning.
It is still unclear what commercial direction this technology will take, as it is still in the early stages of development. There are, however, two identifiable trends. The first trend
concerns
software agents that have been or are being developed to help people perform
routine
tasks; tasks
that people could probably do themselves if they had the time. These software
agents
are far from
‘intelligent’. The first wave of products is hitting the market now. The other
trend is
driven by
researchers in the field of Artificial Intelligence (AI), who are trying to
combine Artificial
Intelligence
with the agent philosophy to create an ‘intelligent’ agent. A great deal of
research and
development
effort has been, and will continue to be, devoted to the field of intelligent agents,
but no
products are
commercially available yet. A good example of such research and development is
the
agent named
Phil produced by Apple Computer. Phil appears in the promotional video called
‘The Knowledge Navigator’ made by John Sculley.
It is sometimes
desirable to control the introduction of new technologies. There are several
ways of
doing so. One
is by means of government regulation, where the application of new technologies
has to
meet current
government rules. Due to the pace of present-day developments, the formulation
of new
government
regulations governing new technologies practically always lags behind. Most
government
regulations are
therefore adopted or amended after these new technologies have been accepted by
industry.
Consequently, the responsible government organisations are responding
reactively. This
leads to a
steadily widening gap between new technologies and adequate government
regulation.
One of the
independent organisations that executes and interprets government regulations
designed
to protect the
privacy of all Dutch inhabitants is the Dutch Data Protection Authority (in Dutch:
the ‘Registratiekamer’). The Registratiekamer is a privacy protection agency
that oversees compliance
with the
jurisdiction’s privacy laws. It is the responsibility of the Registratiekamer
to warn all Dutch
consumers of,
and protect them against, the possible consequences of technologies, especially
new
technologies,
for their privacy. Its policy is to propose privacy regulations governing new
technologies
before these
technologies hit the market. The Registratiekamer also looks for (new)
technical
measures, such
as cryptographic tools like (blind) digital signatures, that could enforce
these
privacy
regulations. Such technical measures to preserve the privacy of individuals are
called
Privacy-Enhancing
Technologies (PETs). The Registratiekamer therefore needs to study new
technologies,
and the impact these technologies might have on the privacy of individuals.
Hence,
one of the
roles of the Registratiekamer is to act as an adviser and partner in the
development of
these
technologies.
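To make one of the cryptographic tools mentioned above more concrete, the following toy sketch (in Python) illustrates the principle behind a blind digital signature: the signer signs a message without ever seeing its content. The tiny numbers and unpadded ‘textbook’ RSA are our own illustrative assumptions; this is a sketch of the principle only, not a secure implementation.

    # Signer's RSA key pair (deliberately tiny, illustrative numbers).
    p, q = 61, 53
    n = p * q                           # public modulus
    e = 17                              # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

    m = 42                              # the message the user wants signed

    # 1. The user blinds the message with a random factor r coprime to n,
    #    so the signer never sees m itself.
    r = 7
    blinded = (m * pow(r, e, n)) % n

    # 2. The signer signs the blinded message with its private key.
    blind_sig = pow(blinded, d, n)

    # 3. The user unblinds the result, obtaining a valid signature on m.
    sig = (blind_sig * pow(r, -1, n)) % n

    # 4. Anyone can verify the signature with the signer's public key.
    assert pow(sig, e, n) == m % n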
The Information
and Privacy Commissioner/Ontario has a mandate under the Ontario Freedom of
Information and
Protection of Privacy Acts to research and comment upon matters relating to the
protection of
privacy with respect to personal information held by government organizations
in
Ontario. In the
fulfilment of that mandate, the IPC is concerned that all information
technologies, if
not properly
managed, could represent a threat to the privacy of the residents of Ontario.
TNO Physics and
Electronics Laboratory (TNO-FEL) is one of the three institutes that form TNO
Defence
Research, part of TNO, the Netherlands Organisation for Applied Scientific
Research. With
a long history
in research and development, application and integration of new defence
technologies,
TNO-FEL has
traditionally devoted the majority of its resources to meeting the demands of
the Netherlands
Ministry of Defence and Armed Forces. Today, however, TNO-FEL participates in
international
as well as national defence programmes and operates in close co-operation with
technological
institutes, industry and universities both inside and outside the Netherlands.
The
Registratiekamer and the Information and Privacy Commissioner/Ontario (IPC), in association with TNO Physics and Electronics Laboratory (TNO-FEL), conducted an earlier study in 1995 of technologies that could improve the privacy of individuals. The results of that study are published in
(Hes, R.
and Borking, J.
editors, 1998, revised edition). A summary of the results of this study is
included in
appendix A. Two
of the technologies studied were blind digital signatures and Trusted Third
Parties
(TTPs).
The
Registratiekamer believes that (intelligent) agent technologies could
jeopardise the privacy of
individuals.
However, these technologies may also be used to protect the privacy of
individuals. A
special privacy
software agent could be developed to exercise the rights of its user, and to
enable
this individual
to protect himself or herself against privacy intrusions with the aid of a PET.
Therefore,
the
Registratiekamer decided to study the privacy aspects of these agent
technologies pro-actively.
Once again,
this study was conducted in close co-operation with TNO-FEL.
Agent
technology
Software agents
have their roots in work conducted in the fields of software engineering, human
interface
research and Artificial Intelligence (AI). Conceptually, they can be traced
back to the late
seventies when
their predecessors, the so-called ‘actors’, were introduced. These actors were
self-contained
objects, with
their own encapsulated internal state and some interactive and concurrent
communication
capabilities. Software agents developed up to now can be classified under
Multiple
Agent Systems
(MAS), one of the three branches of distributed AI research, the others being
Distributed Problem
Solving (DPS) and Parallel Artificial Intelligence (PAI) (Nwana, H.S. and
Azarmi, N.,
1997). Technically, they exhibit many of the properties and benefits common to
distributed AI
systems. These properties include:
– Modularity.
A modular programming approach reduces the complexity of developing software
systems.
– Speed. Parallelism,
the concurrent execution of co-operating programs, increases the execution
speed of the
overall system.
– Reliability.
Built-in redundancy increases the fault tolerance of an application, thus
enhancing its
reliability.
– Operation at
the knowledge level. Utilisation of AI techniques allows high-level messaging.
– Others.
These include maintainability, reusability and platform independence.
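To make these properties concrete, here is a minimal Python sketch, not drawn from any system described in this study, in which two modular agents run concurrently and exchange simple knowledge-level ‘ask’/‘tell’ messages. The message format and agent names are illustrative assumptions, loosely modelled on agent communication languages.

    import threading
    import queue

    class Agent(threading.Thread):
        """A self-contained (modular) unit of software with its own inbox."""
        def __init__(self, name, directory):
            super().__init__(daemon=True)
            self.name = name
            self.inbox = queue.Queue()
            self.directory = directory          # shared name -> agent lookup
            directory[name] = self

        def send(self, to, performative, content):
            # The 'performative' mimics knowledge-level speech acts.
            self.directory[to].inbox.put((self.name, performative, content))

        def run(self):
            while True:
                sender, performative, content = self.inbox.get()
                if performative == "ask":
                    # Answer a query with a 'tell' message (co-operation).
                    self.send(sender, "tell", f"answer to {content!r}")
                elif performative == "tell":
                    print(f"{self.name} learned from {sender}: {content}")
                    return

    directory = {}
    a, b = Agent("A", directory), Agent("B", directory)
    a.start(); b.start()                     # both agents run in parallel
    b.send("A", "ask", "today's schedule")   # B queries A at the knowledge level
    b.join(timeout=2)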
Reasons for
software agents to exist
Research and
development efforts in the area of agent technologies have increased
significantly in
recent times.
This is the result of a combination of ‘market pull’ and ‘technology push’
factors.
The key factor
triggering the ‘market pull’ is information overload. In 1982, the volume of
publicly
available
scientific, corporate and technical information was doubling every five years.
By 1988 it
was doubling
every 2.2 years; by 1992 every 1.6 years. With the rapid expansion of the
Internet (the
Net), one can
expect this rate of increase to continue, which means that by 1998 the amount
of
information
will probably double in less than a year. This dramatic information explosion
poses a
major problem:
how can information be managed so that it becomes available to the people who
need it, when
they need it? How should one organise network flows in such a way as to prevent
massive
retrieval of information from remote sources from causing severe degradation of
network
performance,
i.e., how can one ensure that network capacity is used economically? Software
agents
hold the
promise of contributing to providing a solution to this problem. Agent
technologies can be
used to assist
users in gathering information. Agents can gather and select this information
locally,
thereby
avoiding unnecessary network loads. What distinguishes (multi-)agent
architectures from
other
architectures is that they provide acceptable solutions to certain problems at
an affordable
price.
The key factor
triggering the ‘technology push’ is the rapid development of communication and
information
technology. At present, communication technology offers communication
facilities and
solutions with
increasing capabilities – both in terms of bandwidth and speed – at decreasing
cost.
Definition of
agents
There is no
general agreement on a definition of the word ‘agent’, just as there is no
consensus
within the
artificial intelligence community on a definition of the term ‘artificial
intelligence’. In
general, one
can define an agent as a piece of software and/or hardware capable of acting in
order
to accomplish a
task on behalf of its user. A definition close to present-day reality is that
of
Ted Selker from
the IBM Almaden Research Center:
‘An agent is a
software thing that knows how to do things that you could
probably do
yourself if you had the time’.
For the rest of
this study, the first trend mentioned in chapter one, the development of agents
to help
people perform
routine tasks, will be ignored.
Agents come in
many different flavours. Depending on their intended use, agents are referred
to by
an enormous
variety of names, e.g., knowbot, softbot, taskbot, userbot, robot, personal
(digital)
assistant,
transport agent, mobile agent, cyber agent, search agent, report agent,
presentation agent,
navigation
agent, role agent, management agent, search and retrieval agent,
domain-specific agent,
packaging
agent. The word ‘agent’ is an umbrella term that covers a wide range of
specific agent
types. Most of the popular names used for different agents are highly non-descriptive. It is
therefore
preferable to
describe and classify agents according to the specific properties they exhibit.
An example of
an agent is a Personal Digital Assistant (PDA). The following metaphor (Abdu, D. and Bar-Ner, O.) describes the co-operative, mobile, and learning processes that are present in a PDA.
Metaphor:
‘Bruce awoke
instantaneously at 06:00 AM sharp, expecting a long day of helping his boss,
Hava.
He took a look
at Hava’s daily schedule and then went to the mailbox to see what other
meetings and
appointments he
would have to squeeze in today. There was a request for an urgent meeting from
Doris,
Seppo’s
assistant. He contacted Doris, informing her that Hava had half an hour free at
10:00 AM or
at 5:00 PM and
that Hava personally preferred morning meetings. Doris confirmed 10:00 AM and
Bruce
posted a note
for Hava. Next on his agenda, Bruce went about sorting through the rest of
Hava’s mail and
news bulletins,
picking out a select few that he believed would satisfy her reading habits and
preferences.
At about 9:30
AM he caught a message from Hava’s best friend that tonight she was free.
Knowing that
Hava likes
going with her friend to movies and that she had not yet seen ‘Brave Heart’
with Mel Gibson,
her favourite
actor, Bruce decided to buy them a pair of tickets to the early show and make
reservations at
Hava’s
favourite restaurant. He stepped out and zipped over to the mall, to the ticket
agency, and discreetly
bought the tickets
with Hava’s VISA number. He returned with a big smile on his face and notified
Hava of
her evening
plans. At about 01:00 PM he received an urgent message from Hava telling him
that she was
happy about
tonight’s arrangements, but did not want to see ‘Brave Heart’ because it was
too violent
for her. Bruce
noted Hava’s aversion to violent films for future reference and hurried back to
the mall to try to sell the tickets to someone else and then buy tickets to
‘Sense and Sensibility’ (Hava just loves
Emma Thompson).
At 7:00 PM, before leaving for the movie, Hava notified Bruce that he had done
well
today and then
she turned off the computer (and Bruce of course) for the night.’
Agent ownership
Agents could be
owned by individuals or organisations. These agent-owners can use their agents
to
carry out tasks
to fulfil their own purposes, and to offer agent services to individuals or
organisations
that are not in a position to own an agent. In the metaphor provided above, the
agent
Bruce could be
owned by its boss Hava, but Hava could also have hired Bruce from a company or
organisation
that provides agents. There are a number of reasons why Hava would not be in a
position to own
her own agent. One of the reasons relates to the cost of purchasing an agent or
the
hardware needed
for the proper operation of the agent. Another reason could be the number of
tasks that Hava
wants to delegate to the agent. If the number of tasks is very small, let’s say
fewer
than 3 tasks a
year, it is better for her to hire an agent than to own one.
Service-providers,
such as Internet service-providers, could provide a network infrastructure with
strong
network-servers, and local workstations with only the necessary hardware and
software to
connect to the
network-servers. This structure could also be provided by cable-TV companies,
which
already have
the cable infrastructure and want to provide more services to their
subscribers. Such a
network
infrastructure will reduce the costs of the workstations and, therefore,
increase the
possibilities
for financially less well-endowed individuals to use the Net. These
workstations leave
practically no
room for the installation of additional (local) software, including user-owned
agents.
People who use
these services will end up using agent-services that are provided by the
network-provider.
When using an
agent provided by an agent-provider, the personal data that is provided to the
agent
in order to
create a user-profile can be passed on to, and recorded by, this
agent-provider. This
could be an
undesirable situation for an individual, especially for individuals who are
concerned
about their
privacy. This might be an argument for only using an agent that is owned by the
individual. It
could also be a good reason to draw up an agreement between the individual and
the
agent-provider
which contains, for example, a privacy-intrusion liability clause.
Interaction
between users and agents
In activating
an agent, a user not only delegates tasks to it but also delegates
responsibility and
competence. The
interaction between a user and the agent might be compared to the interaction
between a boss
and a secretary or a master and a servant. By delegating tasks,
responsibilities and
competence the
user loses control over a considerable amount of the agent’s activities. It is
therefore
imperative that
the user can trust the agent that is used, just as the boss trusts his or her
secretary,
and the master
trusts his or her servant.
A lack of trust
could be the result of a difference between the working methods of the user and
the
agent (Norman,
D.A., 1994). If the user doesn’t know what his or her agent is doing, or isn’t
content
with the way
the agent works, he or she might decide never to use this agent again. There should
be some
kind of
agreement between the agent and the user, as there is between secretary and
boss, where the agreement is often based on mutual commitments. The agreement will be tried out for a
probationary
period. During
this period both parties can decide whether they accept the agreement. A user
should
have a
description of the working method of the agent in order to learn more about it
before using the
agent. In this
way, the user knows what to expect from the agent, and can decide the extent to
which
he can trust
the agent.
A lack of trust
could also be avoided by increasing the discretion of the agent. The longer an
agent
works for its
user the more it will know about him or her. As in the relation between master
and
servant, where
the servant knows practically everything about his or her master, it becomes
very
important that
the servant handle this information with the utmost discretion. The servant will be
engaged
on account of
this quality. It is essential that agents have the means to protect the privacy
of their
users. These
means take the form of Privacy-Enhancing Technologies (PETs).
Classification
of Agents
Agents can be
classified according to the specific properties, or attributes, they exhibit
(Nwana, H.S. et al., 1997 and Abdu, D. et al.). These include the following:
– Mobility.
This refers to the extent to which an agent can move around a network. This
leads to a
distinction
between static and mobile agents. Sometimes this includes cloning to distribute
sub-tasks in a
remote environment.
– Deliberative
behaviour. Deliberative agents possess an internal reasoning model and
exhibit
planning and
negotiation skills when engaged with other agents in order to achieve their
goals.
In contrast
with deliberative agents, reactive agents lack an internal reasoning model and instead act upon the environment using a stimulus-response type of behaviour.
– Primary
attributes. The most important attributes of an agent are referred to as
primary attributes;
less important,
or secondary attributes, are listed below. The primary attributes include the
following three:
– Autonomy:
reflects the ability of agents to operate on their own, without immediate human
guidance,
although the latter is sometimes invaluable.
– Co-operation:
refers to the ability to exchange high-level information with other agents: an
attribute which
is inherent in multiple agent systems (MAS).
– Learning:
refers to the ability of agents to increase performance over time when
interacting
with the
environment in which they are embedded. In (Nwana, H.S. and Azarmi, N., 1997), agents combining several of the primary attributes are referred to by different names: autonomous agents that co-operate are called collaborative agents, those that learn are referred to as interface agents, and those that do both are termed smart agents (see the sketch following this list).
– Secondary
attributes. Agents can be classified according to a number of other
attributes, which
could be
regarded as being secondary to the ones described above. Rather than a
comprehensive
list, some
examples of secondary attributes that agents may exhibit will be given. Agents
may be
classified, for
example, by their pro-active versatility – the degree to which they pursue a
single
goal or engage
in a variety of tasks. Furthermore, one might attribute social abilities to
agents,
such as
truthfulness, benevolence and emotions (anger, fear), although the last is
certainly
controversial.
One may also consider mental attitudes of agents, such as beliefs, desires, and
intentions (in
short: BDIs).
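The sketch below (Python) illustrates this attribute-based classification; the data structure and function are our own illustrative assumptions, with the naming taken from (Nwana, H.S. and Azarmi, N., 1997).

    from dataclasses import dataclass

    @dataclass
    class AgentProfile:
        mobile: bool          # static vs mobile
        deliberative: bool    # internal reasoning model vs purely reactive
        autonomous: bool      # primary attribute
        cooperative: bool     # primary attribute
        learning: bool        # primary attribute

    def classify(profile: AgentProfile) -> str:
        """Name an agent after the primary attributes it combines."""
        if not profile.autonomous:
            return "non-autonomous agent"
        if profile.cooperative and profile.learning:
            return "smart agent"
        if profile.cooperative:
            return "collaborative agent"
        if profile.learning:
            return "interface agent"
        return "autonomous agent"

    # The agent Bruce from the metaphor: co-operative, mobile and learning.
    bruce = AgentProfile(mobile=True, deliberative=True,
                         autonomous=True, cooperative=True, learning=True)
    print(classify(bruce))    # -> smart agent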
By combining
these properties and attributes, hybrid agents and heterogeneous agents can be constructed (Caglayan, A.K. and Harrison, C.G., 1997). With hybrid agents, two or more
properties
and/or
attributes are combined in the design of a single agent. This results in the
combination of the
strengths of
different agent-design philosophies in a single agent, while at the same time
avoiding
their
individual weaknesses. It is not possible to separate such an agent into two
other agents.
Heterogeneous
agents combine two or more different categories of agents in such a way that they
interact via a
particular communication language.
Intelligence
and Agency
By varying the
extent of the learning attribute, an agent’s intelligence can range from less to more intelligent. By
varying the extent of the attributes autonomy and co-operation an agent’s
agency can
vary from no
inter-activity with the environment to total inter-activity with the
environment.
In this case,
intelligence relates to the way an agent interprets the information or
knowledge to
which it has
access or which is presented to it (Caglayan, A.K. and Harrison, C.G. 1997).
The most
limited form of
intelligence is restricted to the specification of preferences. Preferences are
statements of
desired behaviour that describe a style or policy the agent needs to follow.
The next
higher form of
intelligence is described as reasoning capability. With reasoning, preferences
are
combined with
external events and external data in a decision-making process. The highest
form of
intelligence is
called learning. Learning can be described as the modification of behaviour as
a result
of experience.
Appendix B gives a more detailed description of reasoning and learning.
Agency relates
to the way an agent can perceive its environment and act on it (Caglayan, A.K.
and
Harrison, C.
G., 1997). Agency begins with asynchrony, where the agent can be given a task
which it
performs
asynchronously with respect to the user’s requests. The next phase of agency is
user
representation,
where an agent has a model of the user’s goals or agenda. In subsequent phases,
the
agent is able
to perceive, access, act on, communicate and interact with data, applications,
services
and other
agents. These phases are called: data inter-activity, application inter-activity,
service
inter-activity,
and agent inter-activity.
By combining
intelligence and agency, it becomes possible to indicate where ‘intelligent’
agents are
positioned.
Figure 2.1 illustrates this. Agents that are positioned in the shaded area are
more or less
‘intelligent’
agents.
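The following Python sketch illustrates these two scales. The numeric ordering of the levels follows the text, but the exact cut-off for the shaded ‘intelligent agent’ area of figure 2.1 is our own illustrative assumption.

    from enum import IntEnum

    class Intelligence(IntEnum):
        PREFERENCES = 1    # statements of desired behaviour
        REASONING = 2      # preferences combined with external events and data
        LEARNING = 3       # behaviour modified by experience

    class Agency(IntEnum):
        ASYNCHRONY = 1                 # tasks performed asynchronously
        USER_REPRESENTATION = 2        # a model of the user's goals or agenda
        DATA_INTERACTIVITY = 3
        APPLICATION_INTERACTIVITY = 4
        SERVICE_INTERACTIVITY = 5
        AGENT_INTERACTIVITY = 6

    def is_intelligent_agent(intelligence: Intelligence, agency: Agency) -> bool:
        # Assumed cut-off for the shaded area: at least reasoning capability
        # combined with at least service-level inter-activity.
        return (intelligence >= Intelligence.REASONING
                and agency >= Agency.SERVICE_INTERACTIVITY)

    print(is_intelligent_agent(Intelligence.LEARNING, Agency.AGENT_INTERACTIVITY))  # True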
The future of
software agents
Some say
‘intelligent’ agents are the stuff of Science Fiction, but is this really so?
No, we don’t think
so – the future
is close at hand. Many current developments in R&D laboratories deal with the
problems of
intelligence, adaptive reasoning and mobility. Nevertheless, people have
exaggerated
expectations
about agents due to the natural enthusiasm of researchers. Researchers see far
into the
future and
imagine a world of perfect and complete agents. In practice, most agents
available today are used to gather information from public networks, like the
Net. Many user-initiated actions are
still needed
for these agents to accomplish their tasks. This means that most agents are
still reactive,
and have not
yet developed as far as most researchers would like. So, today’s agents are
simple in
comparison to
those that are being planned (Norman, D.A., 1994 and Minsky, M. et al., 1994).
However, already in 1990, philosophers (De Garis, H., 1990) warned that in the near future (50
years),
it is likely
that computer and communication technology will be capable of building
brain-like
computers
containing billions of artificial neurons. This development will allow
neuroengineers and
neurophysiologists
to combine forces to discover the principles of the functioning of the human
brain. These
principles will then be translated into more sophisticated computer
architectures and
intelligent
software agents. This development might well become a global political issue in the 21st century. A new branch of applied computer ethics is needed to study the profound
implications of
the prospect of life in a world in which it is generally recognised to be only
a
question of
time before our intelligent software agents and computers become smarter than
we are.
Privacy-Enhancing
Technologies
As stated in
chapter 3, privacy regulations and privacy guidelines have been drawn up by
various
governments and
international governmental organisations. Tough measures are needed to enforce
these
regulations. Up to now, these have taken the form of inspections or audits to
verify whether all
organisations
that collect personal data are complying with the privacy regulations. These
inspections
are
time-consuming and therefore expensive. The Registratiekamer is searching for technologies capable of replacing inspections as a means of enforcing the privacy regulations. The IPC is also on the
lookout for such
privacy-enhancing
technologies (PETs).
This chapter
will describe the potential and implications of using technologies to manage
the threats
described in
the previous chapter and improve the privacy of individuals in an agent-based
environment.
These threats can be managed by using the Identity Protector (IP) described in
(Hes, R.
and Borking, J.
editors, 1998, revised edition). The same publication also describes the technologies used to implement the IP. These technologies are defined as PETs.
The IP
controls the
exchange of the user’s identity within an information system (for a more
detailed
description of
the IP, see appendix A).
In an agent-based environment the IP can be used in two ways:
– between the
user and the agent
– between the
agent and the external environment
When the IP is
placed between the user and the agent, there will be no exchange of personal
data
from the user
to the agent without the approval of the IP and the user. In this way, the user
can control the amount of personal data that is recorded by the agent. This
option could be used to
protect the
user against threats to privacy caused by agent-providers.
Placing the IP
between the agent and the external environment gives the agent comprehensive
powers to
obtain and record personal data from its user. The IP will help the agent to
protect the
personal data
of its user against unwanted dispersion.
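The following Python sketch illustrates both placements of the IP as a consent gate. The class and method names are illustrative assumptions, not an interface defined in (Hes, R. and Borking, J. editors, 1998, revised edition).

    class IdentityProtector:
        """Gate that releases personal data only with the user's approval."""
        def __init__(self, approve):
            self.approve = approve    # callback representing the user's consent

        def release(self, field, value, recipient):
            # Pass a personal-data item on only if the user approves.
            if self.approve(field, recipient):
                recipient[field] = value    # disclose
                return True
            return False                    # withhold

    # Placement 1: between user and agent. The agent only records what the
    # user explicitly approves (here: only reading preferences).
    agent_profile = {}
    ip = IdentityProtector(lambda field, _: field == "reading_prefs")
    ip.release("reading_prefs", "morning news", agent_profile)   # disclosed
    ip.release("credit_card", "0000 0000", agent_profile)        # withheld
    print(agent_profile)    # {'reading_prefs': 'morning news'}

    # Placement 2: between agent and external environment. The agent may hold
    # rich personal data, but the IP blocks unwanted dispersion to outsiders.
    external_party = {}
    ip_out = IdentityProtector(lambda field, _: False)   # user approves nothing
    ip_out.release("reading_prefs", "morning news", external_party)
    print(external_party)   # {} -- nothing leaves the agent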
The PETs
for implementing an IP that are described in appendix A and in (Hes, R. and Borking, J. editors, 1998, revised edition) are only capable of managing some of the threats. To manage the remaining
threats,
existing security technologies that are not yet defined as PETs need to be
applied in such a
way that they
can improve the privacy of individuals. Eventually, these technologies will
also be
called PETs.
Irrespective of
the fact that privacy is not a commodity but a fundamental human right, it has
to be
said that the
protection of an individual’s privacy is still the individual’s own
responsibility and
choice. It is
therefore up to each individual whether to protect it or not. This leaves the
individual
with the
decision of whether or not to use PETs to secure his or her agent. If the
individual
chooses to
protect his or her privacy, he or she still needs to make a choice about the
extent of the
protection
offered by the PETs. The extent of protection could be defined by the
relationship the
individual has
with his or her environment. This relationship can consist of political,
social, public,
or other kinds
of interactions. If the individual opts for an agent whose PETs provide a high degree of privacy protection, this will have consequences for the agent’s performance.