LSWVST Performance

LSWVST compiles to Assembly Language at run time, a technique traditionally known as JIT compilation. We claim that LSWVST is faster than C. How can we make such a claim? Here we will uncover a few of the secrets that allow LSWVST to perform faster than C.

Traditionally, Smalltalk has a single Object-Memory that is garbage-collected. This introduces overhead and complicates things in multithreaded programs. LSWVST introduces multiple Object-Spaces, in which garbage collection can be controlled by the programmer. Every thread has its own Standard-Object-Memory, avoiding synchronisation problems.
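
A rough analogy in C++ (this is not LSWVST code; all names are made up for illustration): per-thread Object-Spaces behave like thread-local arenas, where each thread allocates from its own memory without taking a lock and each arena can be reclaimed independently of the others.

```cpp
// Thread-local arena sketch (illustration only, not LSWVST code).
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

class ObjectSpace {                      // one per thread, like a Standard-Object-Memory
    std::vector<std::uint8_t> heap;      // backing storage for this space only
    std::size_t top = 0;                 // bump-pointer allocation, no synchronisation
public:
    explicit ObjectSpace(std::size_t bytes) : heap(bytes) {}
    void* allocate(std::size_t n) {
        if (top + n > heap.size()) return nullptr;  // caller decides when to collect
        void* p = heap.data() + top;
        top += n;
        return p;
    }
    void reset() { top = 0; }            // reclamation under programmer control
};

thread_local ObjectSpace space(1 << 20); // every thread gets its own space

int main() {
    std::thread t([] { void* p = space.allocate(64); (void)p; });
    void* q = space.allocate(64);        // allocated in the main thread's space
    (void)q;
    t.join();
}
```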

Traditionally in Smalltalk, every object is boxed: an object is referenced through a pointer, and the object itself carries a reference to its class. This reference is necessary to dynamically look up the method when resolving a message-send. As a result, programs which operate on huge data collections of the same type are rather inefficient.
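
A small C++ sketch of the conventional boxed layout (an illustration only, not LSWVST's actual representation): each element is a separate heap object whose header points to its class, and the collection holds only pointers.

```cpp
// Boxed representation sketch: one heap object and one class pointer per element.
#include <cstdio>
#include <vector>

struct Class { const char* name; };      // metadata used for method lookup

struct Object { const Class* klass; };   // every object carries this header

struct BoxedFloat { Object header; double value; };

int main() {
    static const Class floatClass{"Float"};
    std::vector<BoxedFloat*> data;       // the collection stores only pointers
    data.push_back(new BoxedFloat{{&floatClass}, 3.14});
    // Each element pays for a class pointer, an allocation and an extra
    // indirection, which hurts large homogeneous collections.
    std::printf("header bytes per element: %zu\n", sizeof(Object));
    delete data[0];
}
```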

LSWVST introduces a new object-type - Iso-Collections. An Iso-Collection stores the type information of its elements only once.
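
By contrast, a rough C++ analogy of an Iso-Collection (illustration only; the real implementation may differ, and storing the elements unboxed is our assumption here): the element class is recorded once for the whole collection instead of once per element.

```cpp
// Iso-Collection analogy: type information stored exactly once.
#include <cstdio>
#include <vector>

struct Class { const char* name; };

template <typename T>
struct IsoCollection {
    const Class* elementClass;   // one class reference for the whole collection
    std::vector<T> elements;     // contiguous, unboxed payloads
};

int main() {
    static const Class floatClass{"Float"};
    IsoCollection<double> data{&floatClass, {}};
    data.elements.assign(1000000, 3.14);
    // One class pointer for a million elements; the values sit in one
    // contiguous block, the layout a C array of double would have.
    std::printf("%s x %zu elements, one class reference\n",
                data.elementClass->name, data.elements.size());
}
```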

Traditionally, Smalltalk translates a message-send into a method lookup through a cache. Sophisticated implementations use a polymorphic inline cache, which means patching the translated Assembly-code with the address of the last resolved method. This introduces two problems:

First, the resulting code is self-modifying and cannot be put into a ROM. Second, in multithreaded programs the self-modification must be synchronised, which is rather inefficient.
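
For readers unfamiliar with the technique, here is a minimal C++ sketch of an inline cache. It keeps the cache in data for clarity; a real polymorphic inline cache stores it by patching the generated Assembly-code, which is exactly what makes the code self-modifying.

```cpp
// One-entry inline cache at a call site (data-based sketch).
#include <cstdio>

struct Class;
struct Object { const Class* klass; };
using Method = void (*)(Object*);

struct Class {
    const char* name;
    Method printIt;                        // the method we want to dispatch
};

// Slow path: full lookup in the receiver's class.
Method lookup(const Class* klass) { return klass->printIt; }

struct CallSite {
    const Class* cachedClass = nullptr;    // guard value
    Method cachedMethod = nullptr;         // last resolved target
    void send(Object* receiver) {
        if (receiver->klass != cachedClass) {   // cache miss
            cachedClass = receiver->klass;      // "patch" the call site
            cachedMethod = lookup(receiver->klass);
        }
        cachedMethod(receiver);                 // cache hit: direct call
    }
};

void printFloat(Object*)  { std::puts("a Float"); }
void printString(Object*) { std::puts("a String"); }

int main() {
    Class floatClass{"Float", printFloat};
    Class stringClass{"String", printString};
    Object f{&floatClass};
    Object s{&stringClass};

    CallSite site;
    site.send(&f);   // miss, then cached
    site.send(&f);   // hit
    site.send(&s);   // miss again: the call site is rewritten
}
```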

The LSWVST message-send implementation is as fast as the Virtual-Table mechanism in C++.
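
For comparison, this is the C++ mechanism referred to: a virtual call loads the object's table pointer and calls through a fixed slot, independent of how many classes implement the method.

```cpp
// C++ virtual-table dispatch: one table load plus one indirect call.
#include <cstdio>

struct Shape {
    virtual double area() const = 0;   // dispatched through the vtable
    virtual ~Shape() = default;
};

struct Square : Shape {
    double side;
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }
};

double totalArea(const Shape& s) {
    return s.area();                   // compiles to: load vtable, call slot
}

int main() {
    Square sq{3.0};
    std::printf("%f\n", totalArea(sq));
}
```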

LSWVST allows methods to be annotated to further optimize the compilation to native code. We call this technique adaptive native compilation.