# Saturday, September 09, 2006

After two years on the CLR team, I'm moving jobs (and buildings) to Microsoft Research, where I will be working on Parameterized Unit Testing.

 

posted on Saturday, September 09, 2006 11:21:39 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [3]
# Friday, June 10, 2005

NSteer 2: Laying down some interfaces

In the previous episode, we laid down the concepts and vocabulary of autonomous agents. Now we can start designing an interface for each concept identified previously. Here’s a concept summary to get started:

Concept summary

  • Dynamic concepts
    • Agent: the canonical autonomous agent,
    • Behavior: the canonical behavior,
    • Body: defines the kinematics of the agent,
    • Vision: what the agent sees,
    • Integrator: integrates the motion dynamics,
    • Saturator: saturates scalars and vectors,
  • Visualization concepts
    • Visualizable: can be shown/hidden and can be rendered,
    • Sprite: a graphical element with a background and a border,
    • Artifact: a dynamic or visual element attached to an agent (e.g. smoke trail, halo, etc.); an artifact is owned by an agent,
    • Scene: the graphical layer abstraction,
    • World: provides dimensions and projection methods,
  • Service concepts
    • Agent service: maintains the list of active agents,
    • World service: gives access to the world instance,
    • Integrator service: manages agent movement integration and force resolution,
    • Behavior service: manages local and global behaviors.

On top of those concepts, remember that we are going to work with the component-container-service interfaces from System.ComponentModel: IComponent and IContainer.

Dynamic concepts

An agent has a body, a behavior and a vision, and can be visualized.
public interface IAgent : IComponent, IVisualizable
{
    IBody Body { get;}
    IBehavior Behavior { get;}
    IVision Vision { get;}
}
A behavior computes the steering force and is also considered an artifact.
public interface IBehavior : IArtifact
{
    PointF ComputeSteering();
}
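
To give an idea of what ComputeSteering could return, here is a minimal “seek” sketch that steers a body (IBody is defined just below) toward a target. The target point and maximum speed are assumptions for the example, not part of the NSteer interfaces.

using System;
using System.Drawing;

// Sketch only: the classic "seek" steering computation a behavior could use.
// The target point and maxSpeed parameter are assumptions, not NSteer members.
public static class SteeringSketch
{
    public static PointF Seek(IBody body, PointF target, float maxSpeed)
    {
        // Desired velocity points from the body to the target, at full speed.
        float dx = target.X - body.Position.X;
        float dy = target.Y - body.Position.Y;
        float distance = (float)Math.Sqrt(dx * dx + dy * dy);
        if (distance < 1e-6f)
            return PointF.Empty;

        PointF desired = new PointF(dx / distance * maxSpeed, dy / distance * maxSpeed);

        // The steering force is the difference between desired and current velocity.
        return new PointF(desired.X - body.Velocity.X, desired.Y - body.Velocity.Y);
    }
}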
A body implements the kinematics of the agent and is an artifact as well.
public interface IBody : IArtifact
{
    float Mass { get;}
    PointF Position { get;}
    float Direction { get;}
    Referential Anchor { get;}
    PointF Velocity { get;}
    PointF Acceleration { get;}
    IPointSaturator VelocitySaturator { get; set;}
    IPointSaturator AccelerationSaturator { get; set;}
    void Update(
        PointF acceleration,
        PointF velocity,
        PointF position);
}

A few comments about this interface:
1. IPointSaturator is the interface defining a PointF saturator. It is very important to make sure that agents have a maximum speed!
2. Referential is the local frame of reference attached to the agent.
3. One thing is notably missing from this interface: inertia, along with rotational velocity and acceleration. For the sake of simplicity, I decided to leave those aside for now.

A vision determines whether an agent can see another; it has a maximum range (and is an artifact).
public interface IVision : IArtifact
{
    float MaxRange { get;}
    bool CanSee(IAgent agent);
}
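
For illustration, the test behind CanSee could look like the sketch below: a range check plus a view-cone check. Treating Direction as a heading angle in radians is an assumption, and a real IVision implementation would also have to provide the IArtifact members.

using System;

// Sketch only: a range + view-cone test that a CanSee implementation could use.
// Assumes IBody.Direction is a heading angle in radians.
public static class VisionSketch
{
    public static bool IsInViewCone(IBody observer, IBody other, float maxRange, float halfAngle)
    {
        float dx = other.Position.X - observer.Position.X;
        float dy = other.Position.Y - observer.Position.Y;

        // Reject anything beyond the maximum range (squared distances avoid a sqrt).
        if (dx * dx + dy * dy > maxRange * maxRange)
            return false;

        // Compare the heading with the direction to the other body.
        float angleToOther = (float)Math.Atan2(dy, dx);
        float delta = Math.Abs(angleToOther - observer.Direction);
        if (delta > (float)Math.PI)
            delta = 2f * (float)Math.PI - delta;
        return delta <= halfAngle;
    }
}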
An integrator integrates the motion dynamics,
public interface IIntegrator
{
    void Integrate(IBody body, PointF steering);
}
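
As a sketch of what an implementation could look like, here is a minimal explicit Euler integrator that applies the saturators and pushes the result back through Update. The fixed time step is an assumption for the example.

using System.Drawing;

// Sketch only: explicit Euler integration of the body dynamics.
// The fixed time step is an assumption, not part of the NSteer interfaces.
public class EulerIntegrator : IIntegrator
{
    private float timeStep = 0.02f; // seconds per simulation tick (assumed)

    public void Integrate(IBody body, PointF steering)
    {
        // F = m * a  =>  a = F / m
        PointF acceleration = new PointF(
            steering.X / body.Mass,
            steering.Y / body.Mass);
        if (body.AccelerationSaturator != null)
            acceleration = body.AccelerationSaturator.Saturate(body, acceleration);

        // v(t + dt) = v(t) + a * dt
        PointF velocity = new PointF(
            body.Velocity.X + acceleration.X * timeStep,
            body.Velocity.Y + acceleration.Y * timeStep);
        if (body.VelocitySaturator != null)
            velocity = body.VelocitySaturator.Saturate(body, velocity);

        // x(t + dt) = x(t) + v * dt
        PointF position = new PointF(
            body.Position.X + velocity.X * timeStep,
            body.Position.Y + velocity.Y * timeStep);

        body.Update(acceleration, velocity, position);
    }
}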
A saturator will rescale a vector or a scalar depending on the body state.

It also provides a way to determine the maximum vector norm (norm-2) depending on the “attack” angle.

public interface IScalarSaturator
{
    float Saturate(IBody body, float value);
}
public interface IPointSaturator
{
    PointF Saturate(IBody body, PointF vector);
    float GetMaxNormFromAngle(float value);
}
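
A trivial saturator sketch, assuming we just want to clamp a vector to a fixed maximum norm regardless of the attack angle:

using System;
using System.Drawing;

// Sketch only: clamps a vector to a fixed maximum norm, ignoring the attack angle.
public class MaxNormSaturator : IPointSaturator
{
    private float maxNorm;

    public MaxNormSaturator(float maxNorm)
    {
        this.maxNorm = maxNorm;
    }

    public PointF Saturate(IBody body, PointF vector)
    {
        float norm = (float)Math.Sqrt(vector.X * vector.X + vector.Y * vector.Y);
        if (norm <= maxNorm || norm < 1e-6f)
            return vector;

        // Rescale to the maximum allowed norm, preserving direction.
        float scale = maxNorm / norm;
        return new PointF(vector.X * scale, vector.Y * scale);
    }

    public float GetMaxNormFromAngle(float value)
    {
        // This simple saturator is isotropic: the bound does not depend on the angle.
        return maxNorm;
    }
}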

Visualization concepts


A scene provides the methods to render lines, circles, etc…
public interface IScene : IComponent
{
    void Prepare();
    void Flush();
    void DrawLine(Pen pen, PointF start, PointF end);
    ...  
    void PushTranslateTransform(float dx, float dy);
    ...
    void PopTransform();
}

A visualizable element can be visible/hidden and rendered,
public interface IVisualizable
{
    bool Visible { get;set;}
    void Render(IScene scene);
}
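
To show how Render and the scene’s transform stack fit together, here is a small sketch of a visualizable element that draws a body’s velocity vector. The body field and the pen color are assumptions for the example.

using System.Drawing;

// Sketch only: a visualizable element that draws the velocity vector of a body.
public class VelocityMarker : IVisualizable
{
    private IBody body;
    private Pen pen = Pens.Red; // assumed color for the example
    private bool visible = true;

    public VelocityMarker(IBody body)
    {
        this.body = body;
    }

    public bool Visible
    {
        get { return visible; }
        set { visible = value; }
    }

    public void Render(IScene scene)
    {
        if (!visible)
            return;

        // Move the origin to the body position, draw the velocity, then restore.
        scene.PushTranslateTransform(body.Position.X, body.Position.Y);
        scene.DrawLine(pen, PointF.Empty, body.Velocity);
        scene.PopTransform();
    }
}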
A sprite is a graphical element with a background and a border.
public interface ISprite : IVisualizable, IDisposable
{
    string Name { get;}
    Font Font { get;set;}
    Color BackgroundColor { get;set;}
    Color StrokeColor { get;set;}
    float StrokeWidth { get;set;}
}

I’ve added the Name property for debugging purposes.

An artifact is a sprite attached to an agent (e.g. smoke trail, halo, etc…).
public interface IArtifact : ISprite, IOwned<IAgent>
{}
public interface IOwned<T>
{
    event EventHandler OwnerChanged;
    T Owner { get;set;}
}
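
As a sketch of the ownership contract, a base class could raise OwnerChanged whenever the owner is reassigned. OwnedBase is not part of NSteer, just an illustration.

using System;

// Sketch only: setting Owner raises OwnerChanged when the value actually changes.
public class OwnedBase : IOwned<IAgent>
{
    private IAgent owner;

    public event EventHandler OwnerChanged;

    public IAgent Owner
    {
        get { return owner; }
        set
        {
            if (owner == value)
                return;
            owner = value;

            // Notify listeners (e.g. artifacts repositioning themselves) of the change.
            if (OwnerChanged != null)
                OwnerChanged(this, EventArgs.Empty);
        }
    }
}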
The world holds the world dimensions and the projections from/to screen coordinates.
public interface IWorld
{
    Size WorldSize { get;}
    Size ScreenSize { get;}
    PointF ScreenToWorld(PointF point);
    PointF WorldToScreen(PointF point);
    PointF ToWorldRatio { get;}
    PointF ToScreenRatio { get;}
}
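
A minimal IWorld sketch, assuming a plain linear scaling between world and screen coordinates (no panning, zooming or aspect-ratio handling):

using System.Drawing;

// Sketch only: a world with a plain linear mapping between world and screen space.
public class LinearWorld : IWorld
{
    private Size worldSize;
    private Size screenSize;

    public LinearWorld(Size worldSize, Size screenSize)
    {
        this.worldSize = worldSize;
        this.screenSize = screenSize;
    }

    public Size WorldSize
    {
        get { return worldSize; }
    }

    public Size ScreenSize
    {
        get { return screenSize; }
    }

    public PointF ToScreenRatio
    {
        get
        {
            return new PointF(
                (float)screenSize.Width / worldSize.Width,
                (float)screenSize.Height / worldSize.Height);
        }
    }

    public PointF ToWorldRatio
    {
        get
        {
            return new PointF(
                (float)worldSize.Width / screenSize.Width,
                (float)worldSize.Height / screenSize.Height);
        }
    }

    public PointF WorldToScreen(PointF point)
    {
        return new PointF(point.X * ToScreenRatio.X, point.Y * ToScreenRatio.Y);
    }

    public PointF ScreenToWorld(PointF point)
    {
        return new PointF(point.X * ToWorldRatio.X, point.Y * ToWorldRatio.Y);
    }
}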

Service concepts

I’ll skip the service concepts for now. They basically provide access to the different instances: neighborhood, obstacle manager, agent list, etc. For now, it’s time to have a break and a well-deserved cool beer.

posted on Friday, June 10, 2005 5:03:26 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Wednesday, May 25, 2005

A while ago, I mentioned I was planning to build an autonomous agent framework on .NET (this was in fact a long time ago). It’s time to wake this project up with a shiny description:

NSteer is a framework for simulating autonomous agents on .NET.

Before we get down to the code, here is a little snapshot to get you hooked:

 

A bit of history: boids, flocks, herds, etc…

In 1986, Craig Reynolds wrote a little application that simulated the group behavior of birds. The term Boids was born. The beauty of his approach was that the “bird” behavior was not derived from some complicated mathematical equation; it all boiled down to three simple rules that each agent would follow:

  1. Separation: avoid neighbors,
  2. Cohesion: go towards the group center,
  3. Alignment: align with group velocity.

Since then there has been a lot of action in this field, and you can see applications of this approach in many blockbuster hits. Check out Craig’s web site for an impressive list of links to papers (I love this one), samples, frameworks and movies… It was about time to bring .NET into play.

Laying down concepts

Let’s start by identifying the concepts in our problem that we can later translate into interfaces:

  • An Agent describes an independent entity that lives in a World. An Agent has a Vision, a Body and a Behavior.
  • A Body defines the kinematic properties of the agent (position, acceleration, etc…).
  • A Vision defines the region where the Agent can “see” obstacles or neighbors.
  • The Behavior of the Agent yields a steering force that moves the Agent. This is the “brain” of the agent.
  • Agents are confined to a World.
  • A Simulator uses an Integrator to integrate the dynamics of the system.

We will also need ways to access the list of Agents, Obstacles, etc… To do so, I chose the Service – Component – Container pattern, where different services give access to the agents, obstacles, and so on.

In .NET, this pattern is already part of the framework. The IComponent and IContainer interfaces are defined in System.ComponentModel, and there are already base classes such as Container, Component and ServiceContainer to get started with.
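
To make the pattern concrete, here is a minimal sketch using the built-in ServiceContainer. IAgentService and AgentService are placeholders for the services listed below, not final NSteer types.

using System.ComponentModel.Design;

// Sketch only: publishing and resolving a service through ServiceContainer.
// IAgentService/AgentService are placeholders, not final NSteer types.
public interface IAgentService
{
    // agent lookup members would go here
}

public class AgentService : IAgentService
{
}

public class ServiceSketch
{
    public static void Main()
    {
        ServiceContainer services = new ServiceContainer();

        // Publish the service under its interface type...
        services.AddService(typeof(IAgentService), new AgentService());

        // ...and resolve it later from anywhere that holds the IServiceProvider.
        IAgentService agents = (IAgentService)services.GetService(typeof(IAgentService));
    }
}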

For a start, we will define the following services:

  • AgentService, gives access to the Agents,
  • WorldService, gives access to the world dimensions and properties,
  • ObstacleService, gives access to the obstacles in the world,
  • NeighborhoodService, lets agents query for neighboring agents.

At last, we need some concepts for visualization:

  • A Sprite is a 2D drawing with background color, foreground color, etc… It can be hidden or made visible.
  • A Scene is an abstraction of GDI+ (just in case I ever find the courage to implement the framework using DirectX or XAML).

There might be a couple more coming down the road, but this looks like a good starting point. In the next post we’ll see how to lay down the interfaces and maybe get ready for testing.

posted on Wednesday, May 25, 2005 8:35:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]