Hierarchical Temporal Memory


Hierarchical temporal memory (HTM) is a biologically constrained machine intelligence technology developed by Numenta. Originally described in the 2004 book On Intelligence by Jeff Hawkins with Sandra Blakeslee, HTM is today used primarily for anomaly detection in streaming data. The technology is based on neuroscience and on the physiology and interaction of pyramidal neurons in the neocortex of the mammalian (in particular, human) brain. At the core of HTM are learning algorithms that can store, learn, infer, and recall high-order sequences. Unlike most other machine learning methods, HTM continually learns (in an unsupervised process) time-based patterns in unlabeled data. HTM is robust to noise and has high capacity (it can learn multiple patterns simultaneously).

A typical HTM network is a tree-shaped hierarchy of levels (not to be confused with the "layers" of the neocortex, described below). These levels are composed of smaller elements called regions (or nodes). A single level in the hierarchy may contain several regions, and higher levels of the hierarchy often have fewer regions.
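The tree-shaped hierarchy described above can be sketched in code. This is an illustrative structure only, with hypothetical names (it is not Numenta's API): each level is a list of regions, each region is wired to a share of the regions in the level below, and higher levels contain fewer regions.

```python
# Minimal sketch of an HTM-style hierarchy (hypothetical names, not the
# Numenta implementation): levels are lists of regions, and each region
# in a higher level is wired to several "child" regions below it.

class Region:
    """A node in the hierarchy; it will learn patterns from its children."""
    def __init__(self, name):
        self.name = name
        self.children = []

def build_hierarchy(regions_per_level):
    """Build levels bottom-up, wiring each region to children below it."""
    levels = []
    for depth, count in enumerate(regions_per_level):
        level = [Region(f"L{depth}R{i}") for i in range(count)]
        if levels:  # connect each new region to a slice of the level below
            below = levels[-1]
            per_parent = len(below) // count
            for i, region in enumerate(level):
                region.children = below[i * per_parent:(i + 1) * per_parent]
        levels.append(level)
    return levels

# A 3-level hierarchy: 4 sensory regions, 2 mid-level regions, 1 top region.
hierarchy = build_hierarchy([4, 2, 1])
```

Note how the fan-in narrows toward the top: the single top region receives input (indirectly) from all four sensory regions, which is what lets higher levels cover a wider span of input.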


Higher hierarchy levels can reuse patterns learned at the lower levels by combining them to memorize more complex patterns. Each HTM region has the same basic function. In learning and inference modes, sensory data (e.g. data from the eyes) comes into bottom-level regions. In generation mode, the bottom-level regions output the generated pattern of a given class. When set in inference mode, a region (in each level) interprets information coming up from its "child" regions as probabilities of the categories it has in memory. Each HTM region learns by identifying and memorizing spatial patterns: combinations of input bits that often occur at the same time. It then identifies temporal sequences of spatial patterns that are likely to occur one after another. HTM is the algorithmic component of Jeff Hawkins' Thousand Brains Theory of Intelligence, so new findings on the neocortex are progressively incorporated into the HTM model, which changes over time in response. The new findings do not necessarily invalidate the previous parts of the model, so ideas from one generation are not necessarily excluded in its successor.
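The inference-mode behavior described above can be sketched as follows. This is a hedged, minimal illustration (the function and pattern names are hypothetical, not part of any Numenta code): a region scores an incoming bit pattern against the spatial patterns it has memorized, and reports a probability per stored category proportional to overlap.

```python
# Hypothetical sketch of inference mode: score an input bit pattern
# against memorized spatial patterns ("coincidences") and return a
# probability per category. Overlap-based scoring is an assumption here.

def category_probabilities(input_bits, memorized):
    """Return P(category) proportional to overlap with each stored pattern."""
    overlaps = {name: len(input_bits & pattern)
                for name, pattern in memorized.items()}
    total = sum(overlaps.values()) or 1   # avoid division by zero
    return {name: overlap / total for name, overlap in overlaps.items()}

memorized = {
    "edge":   {1, 4, 7, 9},   # bits that co-occur for an "edge" pattern
    "corner": {2, 4, 8, 9},
}
probs = category_probabilities({1, 4, 7}, memorized)   # edge wins: overlap 3 vs 1
```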


During training, a node (or region) receives a temporal sequence of spatial patterns as its input.

1. Spatial pooling identifies (in the input) frequently observed patterns and memorizes them as "coincidences". Patterns that are significantly similar to each other are treated as the same coincidence. A large number of possible input patterns is thereby reduced to a manageable number of known coincidences.
2. Temporal pooling partitions coincidences that are likely to follow each other in the training sequence into temporal groups. Each group of patterns represents a "cause" of the input pattern (or "name" in On Intelligence).

The concepts of spatial pooling and temporal pooling are still quite important in the current HTM algorithms. Temporal pooling is not yet well understood, and its meaning has changed over time (as the HTM algorithms evolved). During inference, the node calculates the set of probabilities that a pattern belongs to each known coincidence. It then calculates the probabilities that the input represents each temporal group.
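The two training steps above can be sketched in a few lines. This is a simplified illustration under stated assumptions, not the Numenta implementation: spatial pooling is approximated by overlap-threshold clustering, and temporal pooling by merging coincidences that frequently follow one another.

```python
# Illustrative sketch (hypothetical names) of the two training steps:
# spatial pooling collapses similar inputs into "coincidences", and
# temporal pooling groups coincidences that follow one another often.
from collections import Counter

def spatial_pool(inputs, threshold):
    """Assign each input to an existing coincidence if it overlaps enough."""
    coincidences, labels = [], []
    for bits in inputs:
        for i, c in enumerate(coincidences):
            if len(bits & c) >= threshold:   # similar enough: same coincidence
                labels.append(i)
                break
        else:
            coincidences.append(set(bits))   # a genuinely new coincidence
            labels.append(len(coincidences) - 1)
    return coincidences, labels

def temporal_pool(labels, min_count=2):
    """Merge coincidences that frequently occur consecutively into groups."""
    transitions = Counter(zip(labels, labels[1:]))
    group = {c: c for c in set(labels)}          # each starts in its own group
    for (a, b), n in transitions.items():
        if n >= min_count and a != b:
            root = group[a]
            for c in group:                      # fold b's group into a's
                if group[c] == group[b]:
                    group[c] = root
    return group

inputs = [{1, 2, 3}, {1, 2, 4}, {7, 8, 9}, {7, 8, 5},
          {1, 2, 3}, {1, 2, 4}, {7, 8, 9}, {7, 8, 5}, {4, 5, 6}]
coincidences, labels = spatial_pool(inputs, threshold=2)
groups = temporal_pool(labels)   # coincidences 0 and 1 alternate, so they merge
```

Here the nine noisy inputs collapse to three coincidences, and the two that repeatedly follow each other end up in one temporal group while the one-off pattern stays apart.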


The set of probabilities assigned to the groups is called a node's "belief" about the input pattern. This belief is the result of the inference, and it is passed to one or more "parent" nodes in the next higher level of the hierarchy. If sequences of patterns are similar to the training sequences, then the probabilities assigned to the groups will not change as often as patterns are received. In a more general scheme, the node's belief can be sent to the input of any node(s) at any level(s), but the connections between the nodes are still fixed. The higher-level node combines this output with the output from its other child nodes, thus forming its own input pattern. Since resolution in space and time is lost in each node as described above, beliefs formed by higher-level nodes represent an even larger range of space and time. This is meant to reflect the organisation of the physical world as it is perceived by the human brain.
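How a parent forms its input from child beliefs can be shown with a minimal sketch. This assumes (as one simple possibility, not the actual mechanism) that the parent concatenates its children's belief vectors, so each higher level sees a pattern covering a wider span of space and time.

```python
# Hedged sketch: a parent node forms its input pattern by concatenating
# the belief vectors of its child nodes. Names are hypothetical.

def parent_input(child_beliefs):
    """Concatenate child belief vectors into the parent's input pattern."""
    pattern = []
    for belief in child_beliefs:
        pattern.extend(belief)
    return pattern

# Two children, each reporting probabilities over its own temporal groups.
child_a = [0.7, 0.2, 0.1]
child_b = [0.1, 0.9]
combined = parent_input([child_a, child_b])   # a length-5 input for the parent
```

The combined pattern spans everything both children cover, which is the sense in which each step up the hierarchy trades spatial and temporal resolution for a larger receptive range.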