oplaTech
Oplatek's external memory

What is an agent?

How do we define an agent?


[Wooldridge, 1995]

  • reactivity

  • proactiveness

  • social ability



Questions:

  • Is a thermostat or a Unix daemon an agent? (proactiveness: yes, reactivity: yes, social ability: no, not even common agents have that :))
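
A minimal Python sketch (all names invented, not from the lecture) of the three properties, using the thermostat from the question above: the agent reacts to the sensed temperature, proactively pursues its own target, and can message other agents.

    # Minimal sketch of Wooldridge's three properties (names are illustrative).
    class Agent:
        def __init__(self, name, goal_temperature):
            self.name = name
            self.goal = goal_temperature     # proactiveness: an own goal to pursue
            self.inbox = []                  # social ability: messages from others

        def perceive(self, environment):
            return environment["temperature"]

        def step(self, environment, other_agents):
            temp = self.perceive(environment)
            # reactivity: respond to the current state of the environment
            environment["heater_on"] = temp < self.goal
            for other in other_agents:       # social ability: communicate
                other.inbox.append((self.name, temp))

    env = {"temperature": 18.0, "heater_on": False}
    a, b = Agent("a", goal_temperature=21.0), Agent("b", goal_temperature=19.0)
    a.step(env, [b])
    print(env["heater_on"], b.inbox)         # True [('a', 18.0)]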



Remarks:

  • The abstraction we use when we want to think about agents

  • What is the difference between a UML object and an agent? An agent wants to gain something, e.g. money... it is vague

  • The goal of the agent is important

  • The environment of the agent is important; the agent is autonomous



Belief - Desire - Intention (BDI) architecture



  • The goal of BDI is to specify rational behaviour

  • Belief: roughly the memory of an agent

  • From desires to intentions: intentions are the things we are currently working on; unlike desires, they must not be in contradiction with each other

  • Desires: pass exams, go to a party, .... Intention: let's go to the party (in order to be rational I cannot also commit to studying)

  • Intentions should be persistent for some time: Granny is going to a sweet-shop, but she is not rational if she reconsiders
    the decision at every step. However, reconsidering it when she finds out that the sweet-shop is closed is rational (see the sketch below).
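
A rough Python sketch (toy example, all names invented) of intention persistence in a BDI-style deliberation loop. The reconsideration test encodes the granny example above: the intention is dropped only when the beliefs say it has become impossible, not at every step.

    # Toy BDI loop (illustrative only): beliefs, desires, one persistent intention.
    def deliberate(beliefs, desires):
        # Commit to a single desire as the intention; intentions must be
        # mutually consistent, so we pick just one.
        achievable = [d for d in desires if beliefs.get(d + "_possible", True)]
        return achievable[0] if achievable else None

    def intention_invalidated(intention, beliefs):
        # Reconsider only when beliefs say the intention became impossible
        # (the sweet-shop turned out to be closed).
        return not beliefs.get(intention + "_possible", True)

    beliefs = {"sweets_possible": True}
    desires = ["sweets", "stay_home"]
    intention = deliberate(beliefs, desires)

    for step in range(5):
        if intention_invalidated(intention, beliefs):
            intention = deliberate(beliefs, desires)  # now reconsidering IS rational
        print(step, intention)
        if step == 2:
            beliefs["sweets_possible"] = False        # the sweet-shop is closed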



Why should rationality be defined like this? See Bratman: Practical Reasoning.

Languages implementing BDI




  • Jason

  • GOAL

  • ....

  • POSH: not as strict, but we are testing it in our labs



POSH resembles behaviour trees.
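
To make the comparison concrete, here is a minimal behaviour-tree sketch in Python (hypothetical names throughout); a POSH drive collection plays a similar prioritised-selection role to the selector node below.

    # Minimal behaviour-tree nodes (illustrative): a selector tries its children
    # in priority order until one succeeds, roughly like a POSH drive collection.
    def selector(*children):
        return lambda state: any(child(state) for child in children)

    def sequence(*children):
        return lambda state: all(child(state) for child in children)

    def condition(key):
        return lambda state: bool(state.get(key))

    def action(name):
        def act(state):
            state["last_action"] = name
            return True
        return act

    tree = selector(
        sequence(condition("enemy_visible"), action("flee")),
        sequence(condition("hungry"), action("eat")),
        action("wander"),                    # fallback behaviour
    )
    state = {"hungry": True}
    tree(state)
    print(state["last_action"])              # eat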

The question is whether these languages are actually used for AI programming in the computer-games industry.

Game: Black & White



  • Used the BDI architecture

  • The agents learn

  • See AI Game Programming Wisdom I, chapter 11.2

  • Means-ends reasoning (Czech: analýza prostředků a cílů)

  • Desire: Hunger, represented by a perceptron. The player teaches the agent to prefer eating when it is sad / sees food / has low energy (see the sketch below)

  • The agent builds up simple decision trees

  • It is just a reactive planner; there are no future desires or intentions

  • Mapping to BDI: Opinions (decision trees) in fact belong to the Belief part of the BDI architecture; Beliefs are attribute lists; Desires are perceptrons; Intentions: the planner chooses the best plan (script) that satisfies the current desires
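
A small sketch (weights and feature names invented for illustration) of a perceptron-style desire as described for Black & White: the intensity of the Hunger desire is a weighted sum of inputs such as sadness, seeing food, and low energy, and the player's feedback nudges the weights.

    # Perceptron-style desire intensity, Black & White flavour (illustrative values).
    def hunger_intensity(weights, features):
        return sum(w * f for w, f in zip(weights, features))

    def train_step(weights, features, feedback, lr=0.1):
        # Player feedback (+1 "good, eat more", -1 "bad") nudges the weights.
        return [w + lr * feedback * f for w, f in zip(weights, features)]

    weights = [0.2, 0.5, 0.8]      # sadness, seeing_food, low_energy
    features = [0.9, 1.0, 0.3]     # current perceptions in [0, 1]

    print(hunger_intensity(weights, features))    # 0.92
    weights = train_step(weights, features, feedback=+1)
    print(hunger_intensity(weights, features))    # higher after praise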



Comparison:

  • Tyrrell's architecture does a similar thing, inhibiting the other actions once the first action is chosen (see the sketch below)
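
A toy illustration (invented numbers) of that inhibition idea: the strongest candidate action wins and suppresses the others.

    # Winner-take-all selection with inhibition, loosely in the spirit of the
    # Tyrrell comparison above (illustrative only).
    def select_action(candidates, inhibited):
        viable = {a: v for a, v in candidates.items() if a not in inhibited}
        if not viable:
            return None
        winner = max(viable, key=viable.get)
        inhibited.update(a for a in candidates if a != winner)  # inhibit the rest
        return winner

    inhibited = set()
    print(select_action({"eat": 0.7, "flee": 0.4}, inhibited))  # eat
    print(inhibited)                                            # {'flee'}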



How to speed up if-then rules



  • Do not recompute conditions that were already computed for the rules above

  • Evaluation: the RETE algorithm (slides: H-likeAgents8_Brom_060515.pdf)

  • We have to be able to QUICKLY evaluate whether the atomic conditions CHANGED in the current rule (see the sketch below)

  • In RETE, the listeners ("something has changed") can be very expensive; they can be added on the fly
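
A very reduced sketch (invented example, nowhere near a full RETE implementation) of the core idea: evaluate each atomic condition once, cache the result, and re-evaluate only the conditions whose input facts changed.

    # Caching atomic conditions shared by several if-then rules (illustrative).
    # The RETE idea in miniature: each atomic test is recomputed once per
    # relevant change, not once per rule that mentions it.
    conditions = {
        "hungry":    lambda facts: facts["energy"] < 30,
        "sees_food": lambda facts: facts["food_distance"] < 5,
    }
    rules = [
        (("hungry", "sees_food"), "eat"),
        (("hungry",),             "search_for_food"),
    ]
    depends_on = {"hungry": {"energy"}, "sees_food": {"food_distance"}}
    cache = {}

    def on_change(facts, changed_keys):
        # Re-evaluate only the conditions whose input facts changed (the
        # "listeners" mentioned above); everything else comes from the cache.
        for name, test in conditions.items():
            if name not in cache or depends_on[name] & changed_keys:
                cache[name] = test(facts)
        for premises, act in rules:
            if all(cache[p] for p in premises):
                return act
        return None

    facts = {"energy": 20, "food_distance": 10}
    print(on_change(facts, {"energy", "food_distance"}))  # search_for_food
    facts["food_distance"] = 2
    print(on_change(facts, {"food_distance"}))            # eat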