A very small update this time, although in a way it’s one of the more substantial changes to the direction of the software.
In the past, when you double-clicked an ordinary discrete node, a dialog would pop up with the probabilities in the CPT. This still happens, but the numbers are now actually labelled and you can edit them. There’s no validation there at the moment, so if you enter nonsense, it will still try to run the network with whatever you’ve entered. What you edit will also get saved to file, and again, if you enter nonsense, who knows what can happen! (Actually, it will probably just result in a file you can’t open.)
One thing I’ve implemented here is that as soon as you click ‘Save’, the network will try to run the inference again. This will no doubt be annoying for large networks, but for small networks it tightens the ‘try it and see’ loop. I’ll put in a setting at some later stage to control this.
- Added headers and parent state combination names to the CPT view (double-click a node to view)
- Allow editing of CPT
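Since there’s no validation yet, a check along these lines could catch the obvious cases before saving. This is only a sketch, and `validateCPT` and the rows-of-probabilities representation are my own invention here, not the app’s actual data model:

```javascript
// Hypothetical validation for an edited CPT: each row is a conditional
// distribution over the node's states, so every entry must be a finite
// number in [0, 1] and each row must sum to (approximately) 1.
function validateCPT(rows, epsilon = 1e-6) {
  return rows.every(row => {
    if (!row.every(p => Number.isFinite(p) && p >= 0 && p <= 1)) return false;
    const sum = row.reduce((a, b) => a + b, 0);
    return Math.abs(sum - 1) <= epsilon;
  });
}
```

A check like this would still let you save a network that’s semantically odd, but it rules out the “file you can’t open” class of nonsense.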
For a little while, I’ve been wanting to add support for Netica .dne files, and I’ve finally managed to land something. The support is very crude (even cruder than for GeNIe .xdsl files) and probably very flaky. There’s only support for basic networks with discrete chance nodes; there’s no support at all for deterministic nodes, equations, decision nodes, utility nodes or dynamic nodes (unlike the GeNIe support, which does handle these).
I included another one of my side-projects in order to get this support, namely a parser. I ought to write a separate post about the parser at some point, but in short, it’s meant to be a very quick way to use a grammar (with embedded regular expression support) to transform an input source into not just a syntax tree, but something that more closely represents the final desired object model. It’s not quite there yet, but it’s certainly good enough to do the parsing work needed here.
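I can only guess at that parser’s actual API, but the general idea, pairing grammar fragments (with embedded regexes) with builders so that matching yields the final objects directly rather than a generic syntax tree, can be sketched roughly like this. Everything below (the rule shapes, the toy syntax) is invented purely for illustration:

```javascript
// Toy illustration of the idea (not the real parser's API): each rule pairs
// a regex with a builder function, so a successful match produces the final
// object-model value directly instead of a generic syntax-tree node.
const rules = [
  { re: /^node\s+(\w+)\s*\{\s*states\s*=\s*([\w,\s]+)\}/,
    build: m => ({ kind: 'node', name: m[1], states: m[2].split(',').map(s => s.trim()) }) },
  { re: /^arc\s+(\w+)\s*->\s*(\w+)/,
    build: m => ({ kind: 'arc', from: m[1], to: m[2] }) },
];

function parseLine(line) {
  for (const { re, build } of rules) {
    const m = re.exec(line.trim());
    if (m) return build(m);
  }
  return null; // unrecognised line
}
```

The real thing handles an actual grammar rather than line-by-line regexes, but the “match straight into the object model” shape is the point.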
Anyway, here’s the 14th release of Make-Believe:
- Initial Netica .dne support for basic networks (discrete chance nodes only!)
- Update beliefs immediately when a BN is loaded from the query string
I finally have workers performing inference as quickly as on the main thread. This means workers now actually speed up the inference, with substantial improvements as you move up to 4-8 cores. In fact, updating speed is no longer merely comparable to GeNIe’s; it can be quite a bit faster. Take that with a large grain of salt, though: it’s BN-dependent, I’m not testing whether the output variance is the same, and I’m only comparing runs with the same number of sample iterations.
One nice side-effect of this is that I can finally unify the main-thread and worker-thread implementations. This may encourage me to clean the code organisation up a little bit. I hope.
Two workers are enabled by default for now. To change this number, go to the ‘Debug’ menu. It’s best to set the number to, or slightly below, the number of CPUs/cores that you have.
- Made worker-based belief update run as quickly as main thread update
- (The solution was to recreate BNs on the worker side, rather than passing them in via structured clone. I don’t know why this is needed, since the structures (TypedArrays, etc.) were identical as far as I could tell. This was the case on both Firefox and Chrome, so it’s obviously something I missed.)
- Enabled worker-based belief updating by default
- Added a ‘Debug’ menu
- (Mostly just performance-testing options.)
- Performance tweaks to continuous variables
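For anyone curious what “recreate BNs on the worker side” means in practice: instead of posting the live BN object and letting structured clone copy it, post only plain data and rebuild the typed-array-backed structures inside the worker. A minimal sketch, with the BN shape and the `serializeBN`/`rebuildBN` names invented for illustration:

```javascript
// Sketch of the workaround: ship plain arrays/objects to the worker and
// allocate fresh TypedArrays on the receiving side, rather than letting
// structured clone copy the live BN. The BN shape here is hypothetical.
function serializeBN(bn) {
  // Plain data only: cheap to post across the thread boundary.
  return {
    nodes: bn.nodes.map(n => ({ name: n.name, cpt: Array.from(n.cpt) })),
  };
}

function rebuildBN(data) {
  // Freshly allocate the typed arrays inside the receiving thread.
  return {
    nodes: data.nodes.map(n => ({ name: n.name, cpt: Float64Array.from(n.cpt) })),
  };
}

// Main thread:   worker.postMessage(serializeBN(bn));
// Worker thread: onmessage = e => { const bn = rebuildBN(e.data); /* run inference */ };
```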
Ack! The previous version I posted had broken the ability to enter evidence, and due to an extended period of busyness, I didn’t notice until now. (No regression tests yet — I do this for fun.) Oh well, thankfully no-one is paying attention.
It’s also worth pointing out how limited the overall support is at the moment. You can’t enter evidence into a net with continuous nodes. You can’t use the continuous nodes with worker threads. The Normal sampling is very crude. The distributions that are displayed are just histograms up to a maximum of 10 bins. I also haven’t thought about performance yet. As I’ve noted, I do this because I enjoy it, so I’m not rushing things — there’s no need to scoff down the whole dessert at once.
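For what it’s worth, even “crude” Normal sampling plus a fixed-bin histogram isn’t much code. Here’s a minimal sketch using the Box-Muller transform and a 10-bin cap like the current display; this is my own illustration, not the app’s actual implementation:

```javascript
// Box-Muller Normal sampling: two uniforms in, one N(mean, sd) draw out.
// (Using 1 - u1 avoids taking log(0).)
function sampleNormal(mean = 0, sd = 1) {
  const u1 = Math.random(), u2 = Math.random();
  return mean + sd * Math.sqrt(-2 * Math.log(1 - u1)) * Math.cos(2 * Math.PI * u2);
}

// Equal-width histogram capped at maxBins bins, as in the current display.
function histogram(samples, maxBins = 10) {
  const lo = Math.min(...samples), hi = Math.max(...samples);
  const width = (hi - lo) / maxBins || 1; // guard against all-equal samples
  const bins = new Array(maxBins).fill(0);
  for (const x of samples) {
    const i = Math.min(maxBins - 1, Math.floor((x - lo) / width));
    bins[i]++;
  }
  return { lo, width, bins };
}
```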
From now on, I’ll put each release in its own folder, so that there’s at least a working version to go back to if I break things again! Here’s Release 12:
- Fixed entering of evidence
- Slightly improved the robustness of saving (note: can’t save continuous nodes at all yet, and probably many other types of node)
- Initial support for continuous nodes
I’ve added a very crude view of node CPTs — just double-click a node. (No support for deterministic nodes yet.) I expect I’ll be improving this in the next iteration. Also, moving nodes around is no longer quite so awful, with arrows updating as you move.
- Arrows update continuously as you move nodes
- Added viewing of a CPT by double-clicking node
I’ve added a menu bar and it’s starting to look a bit more like an application now. I suppose that’s a good thing.
- Added menu bar
- Very slight performance tweak
Another small update, this time adding a control in the toolbar for the number of iterations that the likelihood weighting algorithm performs. At some point I’m going to need to clean up the toolbar.
- Added a toolbar control for changing the number of iterations of the inference algorithm
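To make “iterations” concrete: likelihood weighting draws that many weighted samples, so the estimate tightens as the count grows. Here’s a toy version on a two-node network, with made-up parameters and nothing taken from the app’s implementation:

```javascript
// Toy likelihood weighting on a two-node network A -> B, with evidence B = true.
// Sample A from its prior; weight each sample by the evidence likelihood
// P(B = true | A). All parameters here are made up for the example.
const pA = 0.3;                               // P(A = true)
const pBgivenA = { true: 0.9, false: 0.2 };   // P(B = true | A)

function estimatePAgivenB(iterations) {
  let weighted = 0, total = 0;
  for (let i = 0; i < iterations; i++) {
    const a = Math.random() < pA;  // sample A ~ P(A)
    const w = pBgivenA[a];         // weight by P(evidence | sample)
    total += w;
    if (a) weighted += w;
  }
  return weighted / total;         // estimate of P(A = true | B = true)
}
```

The exact posterior here is 0.3·0.9 / (0.3·0.9 + 0.7·0.2) ≈ 0.659, and more iterations just buy a tighter estimate of it, which is exactly what the new toolbar control trades off against update time.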
I missed the R8 release from two weeks ago (partly because I forgot, and partly because I was busy), so here it is now. Sadly, there isn’t much to get excited about in this release. The main change is the ability to do automatic layout using the dagre.js library. I haven’t tweaked the defaults at all, so the layouts it creates right now aren’t ideal.
- Auto layout using the dagre.js library
The work on R6 allowed me to add an experimental worker thread implementation in this release. Unfortunately, I’m seeing no performance gain from it at this stage, even on a multi-core system, so it’s off by default. No other changes in this release.
- Added experimental worker threads (currently worse perf)
For this version, I focused on making some improvements to the backend, disentangling the .xdsl format from the internal BN representation. I also added a crude and experimental save to .xdsl function — expect it to work for basic networks only, and even then less often than not. Here it is:
I’ve also finally put the code on GitHub.
- Backend rewrite to remove dependencies on .xdsl format for BN representation and display
- Extremely experimental .xdsl saving (basic networks only)