Hi there!
It’s been a decent while since we got a devlog out, as we’ve been kept busy with playtesting, polish, bug fixes, and general business endeavours. A lot of what has happened since then deserves devlogs of its own, to be honest — but today I’m going to talk a little about a subject that is very important to Mechanical Sunset, and really to almost any game: dialogue!
It has been clear to us from the beginning that Mechanical Sunset will require a fairly advanced system for handling dialogue. Robots in the City should feel alive and responsive, react to the player’s actions, offer advice and hints, and generally behave in a human-like (well, robot-like) manner.
Devlog 17

Fredrik “Olaxan” Lind
Speech Therapist
A lot of people are familiar with the concept of a dialogue tree: a branching set of character replies and options for the player to choose between. A dialogue tree allows you to offer dialogue conditionally, meaning robots can be programmed to say certain things only based on the state of the current scene. A common example in the MechSun demo is robots commenting on the state of a nearby light — the player might have turned it off.
Here is an example of one such tree, as implemented in the current demo.

Users who are familiar with Unreal may cringe and shudder at the sight of this, for reasons I’ll soon explain.
This is the logic of TUTIS’ idle commentary, which appears over his head when the player approaches.

A Selector node will check its children from left to right, continuing with the first one that reports success. It can be used to output dialogue conditionally.
A Sequence node will simply execute its children in order (left to right), aborting if any of them fails.
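To make those two node types concrete, here is a tiny engine-agnostic sketch of a Selector and a Sequence. The function names and the light-switch condition are illustrative only — this is not Unreal’s actual Behavior Tree API, just the control flow described above.

```python
def selector(children, state):
    """Try children left to right; succeed on the first child that succeeds."""
    for child in children:
        if child(state):
            return True
    return False

def sequence(children, state):
    """Run children left to right; fail as soon as one child fails."""
    for child in children:
        if not child(state):
            return False
    return True

# Conditional dialogue, in the spirit of the nearby-light example:
def light_is_off(state):
    return not state["light_on"]

def say(line):
    def task(state):
        state["spoken"].append(line)
        return True
    return task

state = {"light_on": False, "spoken": []}

# Selector: comment on the light if it's off, otherwise fall back to idle chatter.
selector([
    lambda s: sequence([light_is_off, say("Hey, who turned off the lights?")], s),
    say("Lovely weather for rusting."),
], state)

print(state["spoken"])  # ['Hey, who turned off the lights?']
```

Because the Selector stops at the first successful child, the fallback line is only ever spoken when the light-off branch fails.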
This simple logic has allowed us a lot of advanced world interactions with relative ease!
For instance, AMSTRAD the robot can notice if the player has turned off his ceiling light but fixed the power to the area — or TUTIS can offer what is pretty much a step-by-step solution to the first power puzzle, if prompted.
So what’s bad about the above?
Well, Unreal devs will notice that the images depict a Behavior Tree.
Behavior Trees are common in game engines to provide a designer-friendly interface for AI programming.
They are meant to provide an intuitive logic flow for creating agents that can chase the player, investigate noises, mill around randomly — the sort of thing we’re used to from open-world games.
They are NOT meant for dialogue!
There are a few reasons for this, and I’ll try to explain the main ones.
This is a DIALOGUE TREE. It is the result of us porting all dialogue to the frankly excellent dialogue plugin NotYetDialogue for Unreal.

If there’s a takeaway to be had from this devlog, it’s that plugin. Remember the name for whenever you find yourself in a situation that demands any sort of dialogue, because it is excellent.
Now, what’s the difference between that graph, and the Behavior Tree?
The main difference is that a Dialogue graph allows for multiple connections to one node. As you can see in the picture above, several dialogue choices feed into the central node, and the dialogue proceeds from there.
That is not allowed in a Behavior Tree. Now, this might seem like a massive downside (and it is!), but using Sequences smartly will allow for a very similar behavior — so no deal-breaker yet.
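To illustrate what “multiple connections to one node” buys you, here is a minimal dialogue-graph sketch. The node IDs and data layout are hypothetical (this is not the plugin’s actual format) — the point is simply that two different choices can point at the same continuation node, which a strict tree forbids.

```python
# Unlike a tree, several edges in a graph may point at the same node.
nodes = {
    "greet":     {"text": "What do you want to know?", "choices": ["ask_door", "ask_light"]},
    "ask_door":  {"text": "The door is jammed.",       "choices": ["hub"]},
    "ask_light": {"text": "The light is out.",         "choices": ["hub"]},
    "hub":       {"text": "Anything else?",            "choices": []},  # shared by both branches
}

def follow(start, picks):
    """Walk the graph from a start node, taking the given choice index at each step."""
    node = start
    for pick in picks:
        node = nodes[node]["choices"][pick]
    return node

# Both dialogue options converge on the same "hub" node:
print(follow("greet", [0, 0]))  # hub
print(follow("greet", [1, 0]))  # hub
```

In a tree, the "hub" node would have to be duplicated under each branch, with all the maintenance headaches that implies.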
Another difference is that nodes in Behavior Trees are meant to be more or less independent of one another. This led to challenges for us: we could not visualise on the dialogue UI which dialogue paths had been taken, which choices would end the dialogue, and so on. There was no real knowledge of what was coming next in the tree.
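This is exactly the kind of lookahead a dialogue graph makes trivial: since the whole structure is known up front, the UI can inspect a node’s children and, for instance, flag choices that end the conversation. A sketch, with made-up data rather than the plugin’s actual format:

```python
# Hypothetical graph data; node IDs and text are illustrative only.
graph = {
    "intro":   {"text": "Need a hint?",          "children": ["hint", "goodbye"]},
    "hint":    {"text": "Try the breaker box.",  "children": ["goodbye"]},
    "goodbye": {"text": "See you around.",       "children": []},
}

def choice_labels(node_id):
    """Return the choices under a node, marking those with no continuation."""
    labels = []
    for child_id in graph[node_id]["children"]:
        child = graph[child_id]
        suffix = " [ends dialogue]" if not child["children"] else ""
        labels.append(child["text"] + suffix)
    return labels

print(choice_labels("intro"))
# ['Try the breaker box.', 'See you around. [ends dialogue]']
```

With a Behavior Tree, where each node only knows about itself, this sort of peek-ahead is much harder to do cleanly.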
So obviously Behavior Trees are a bad choice for dialogue. Why did we ever go with them to begin with?
The reasons are twofold!
1. A very popular tutorial that will come up when you search for these things will lead you down the path to ruin.
2. It’s actually quite powerful to have dialogue and AI behavior tightly coupled!
With dialogue baked into the AI behavior tree, we can take full advantage of Unreal’s AI Tasks, such as moving, animating, and modifying AI state; directly from the dialogue.
This setup, for instance, allowed us to rotate a robot to point at a specific door in the scene, using AI Tasks.

We could have robots being told to walk somewhere, and obey. We could have robots commenting on their own AI actions by having them speak through their “ambient” dialogue (the widget above their heads).
Now, is this type of behavior impossible with a more loosely coupled dialogue-AI relationship? No, certainly not. It was simply nice while it lasted, and it made it harder to see that the whole approach was a fairly bad idea.
I’m happy to end this devlog by telling you that transitioning our dialogue to the new system was a neat and quite quick affair, and that the new plugin will allow us MUCH more flexibility going forwards.
I would like to again mention NotYetDialogue (https://gitlab.com/NotYetGames/DlgSystem), because the plugin deserves the recognition. It is robust against errors, flexible, and designed in a manner that fits most games. It is highly recommended.
And if there’s a takeaway to be had apart from that, perhaps it’d be that even a system that seems good in a lot of regards can be undermined by grievances that are hard to fix, and that one shouldn’t be afraid to take a few steps back and recognize problems sooner rather than later. I liked our previous method of doing dialogue, but the new way is simply better in all regards.
While a lot has changed under the hood, the UI has received only minor touch-ups.

Well, porting the dialogue may have been painless, but it was also mind-numbingly dull and repetitive, with no fewer than 34 trees (including ambient commentary) having been migrated manually.
With that out of the way, I am more than happy to say: the Gumlins, the robots, and I wish you a good weekend!
See you next time!
Hope you’re having a splendid day!

There’s no tree I can’t climb.
~ The Gumlin