The first users would be individuals who are learning some complex domain (e.g. a professional practice or a field of science) and want to make their thinking in that domain quicker, more reliable, and less energetically costly. In other words, these are the same people whom Andy Matuschak targets with the mnemonic medium: the mnemonic medium should be developed in a context where people really need fluency.

The tool helps learners deliberately build and garden their mental maps (Reference frames) of a given domain, leveraging natural language models heavily and applying natural language inference to overcome the weaknesses of earlier attempts at building such tools.

In terms of the interface, I think the tool should be a visual graph modeller and explorer, similar to TheBrain, rather than a text editor. However, I think the tool should impose two important limitations:

The tool should be probabilistic and leverage natural language models heavily

I think a big drawback of existing ontology editors is that they are too formal: not probabilistic and imprecise enough. The concepts in the proposed tool should be nebulous by default: a concept doesn't have a single fixed title (unlike evergreen notes, whose titles are like APIs). When a learner adds a new concept using a particular word or phrase, the tool automatically expands it into a cloud of words and terms (using a language model) and coalesces (or probabilistically attaches) the concept with already existing concepts: e.g., if the learner added "public transport" and "bus" in different reference frames, the two concepts should automatically attach to each other with some probability.
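
To make this concrete, here is a minimal sketch of the expand-then-attach step. The `related_terms` table is a hand-written stand-in for the language model's term-cloud expansion (a real tool would query a model or an embedding service), and the overlap of two clouds serves as a crude attachment probability; all names here are hypothetical.

```python
# Hand-written stand-in for a language model's expansion of a phrase
# into a cloud of associated terms. A real tool would query an LM here.
related_terms = {
    "public transport": {"bus", "tram", "metro", "transit", "train"},
    "bus": {"bus", "coach", "transit", "public transport"},
    "li-ion cell": {"battery", "anode", "cathode", "electrolyte"},
}

def expand(phrase):
    """Expand a phrase into its cloud of terms (LM stand-in)."""
    return {phrase} | related_terms.get(phrase, set())

def attachment_probability(a, b):
    """Jaccard overlap of the two term clouds, used as a crude
    probability that two concepts denote related things."""
    ca, cb = expand(a), expand(b)
    return len(ca & cb) / len(ca | cb)

# "public transport" and "bus" overlap, so they attach with some
# probability; "public transport" and "li-ion cell" do not.
p = attachment_probability("public transport", "bus")
```

A real implementation would likely replace the Jaccard overlap with cosine similarity over embedding vectors, but the shape of the mechanism is the same: expansion first, then probabilistic attachment.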

The tool should strike a fine balance between the relative formalism/strictness of ontologies (every reference frame should be ontologically consistent) and the fuzziness of associations and nebulosity of concepts. (Here, I adapt Levenchuk's idea about why formal architecture modelling tools lose to more informal tools such as coda.io.)

The tool could also use the language model to automatically choose the word or phrase most appropriate to denote the concept in a particular reference frame view.
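
One way this label choice could work, sketched with hypothetical data: score each candidate label against the vocabulary already present in the frame being viewed, again using a hand-written term table in place of a real language model.

```python
# Toy stand-in for LM term associations; a real tool would query a model.
clouds = {
    "bus": {"bus", "transit", "vehicle", "route"},
    "public transport": {"transit", "city", "infrastructure", "network"},
}

def pick_label(candidates, frame_terms):
    """Choose the candidate label whose associated terms overlap most
    with the vocabulary of the reference frame being viewed."""
    return max(candidates, key=lambda c: len(clouds.get(c, set()) & frame_terms))

# The same concept is shown as "public transport" in a frame about city
# infrastructure, but as "bus" in a frame about vehicles and routes.
infra_frame = {"city", "infrastructure", "network", "budget"}
vehicle_frame = {"vehicle", "route", "driver"}
```

So one nebulous concept can surface under different names in different frames, which is exactly the behaviour the fixed titles of evergreen notes rule out.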

The tool could also leverage the language model (perhaps primed with books and other written materials on the domain being studied, for example a web of evergreen notes) to suggest new nodes in reference frames to the learner:

[Figure: li-ion-cell.svg — a reference frame for the "Li-ion cell" concept and its features]

When the learner does not explicitly specify the type of a relationship (for example, in the reference frame above, it is not explicitly stated that the relationship between the "Li-ion cell" concept and its features is functional-part-of), the tool may derive it automatically using its language model and annotate the relationships in the navigation interface.
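
A minimal sketch of that inference step, with a keyword table standing in for asking a language model "what relation holds between X and Y?"; the relation names and example terms are assumptions for illustration:

```python
# Stand-in for an LM's judgement about which relation holds between a
# parent concept and a child node; a real tool would prompt a model.
RELATION_HINTS = {
    "functional-part-of": {"anode", "cathode", "electrolyte", "separator"},
    "instance-of": {"bus", "tram"},
}

def infer_relation(parent, child):
    """Guess the relation type for an unlabeled edge; fall back to a
    generic label when the model is unsure."""
    for relation, members in RELATION_HINTS.items():
        if child in members:
            return relation
    return "related-to"
```

The inferred labels would then be shown as annotations in the navigation interface rather than demanded from the learner up front, preserving the tool's informality.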

The tool may also suggest that the learner add new reference frames to existing concepts: "You have added a functional breakdown of a Li-ion cell; would you like to add a constructive breakdown now?"

Suggesting to the learner how to optimise the concept structure, extract new concepts (perhaps even coining original words for them, using some morphological model!), or coalesce concepts is probably beyond the reach of today's state-of-the-art natural language models, but five years from now, AI should definitely be able to help human learners improve the structure of their mental reference frames.

At present, the tool should at least try to make reorganising the topology of concepts as easy and automated as possible: I think "structure ossification" is a big issue with most existing knowledge management, spaced repetition, and note-taking systems. Perhaps the probabilistic nature of the tool could help here.
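
For instance, one automated reorganisation pass could merge concepts whose term clouds overlap strongly, so the learner never restructures the graph by hand when two names turn out to denote the same thing. This is a sketch under assumptions: the clouds are hand-written stand-ins for language-model similarity, and the threshold is arbitrary.

```python
# Hand-written term clouds standing in for LM-derived similarity.
clouds = {
    "car": {"car", "automobile", "vehicle"},
    "automobile": {"automobile", "car", "vehicle", "motor"},
    "battery": {"battery", "cell", "energy"},
}

def similarity(a, b):
    """Jaccard overlap of two concepts' term clouds."""
    ca, cb = clouds[a], clouds[b]
    return len(ca & cb) / len(ca | cb)

def merge_pass(concepts, threshold=0.5):
    """Single-link grouping in one pass: a concept joins the first
    existing group containing a sufficiently similar member."""
    groups = []
    for c in concepts:
        for g in groups:
            if any(similarity(c, m) > threshold for m in g):
                g.append(c)
                break
        else:
            groups.append([c])
    return groups
```

Because attachments are probabilistic rather than hard-coded, passes like this can be rerun cheaply as the map grows, which is precisely what a rigid ontology editor makes painful.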

Authored sets of reference frames (mental maps) in a mnemonic medium

Many types of prompts that Andy Matuschak describes in "How to write good prompts: using spaced repetition to create understanding" elucidate different parts of conceptual reference frames: many "simple fact" and conceptual prompts, and all list prompts, invite the learner to recall the features of some reference frame. (Procedural prompts train sequence memory, which the proposed tool could support too, but discussing that is out of scope for this post.)

Therefore, it seems that a mnemonic medium with a good set of prompts presupposes a relatively well-defined set of reference frames (in the mind of the medium's author), but stops short of imparting these reference frames to learners more explicitly than "by example", i.e. via prompts and their spaced repetition.

I don't see particular value in stripping learners of the ability to explore the reference frames more directly once the author has finished writing a mnemonic medium or a course. So the proposed tool could be integrated with mnemonic media: it could be loaded with a set of reference frames that the learner "fills in"/"unlocks", as in a computer game.

Spaced repetition