Using Essence# as a .Net bridge

Eventually, you should be able to use Essence# as a bridge between Smalltalk programs that don’t run on .Net and applications and libraries and services that are native to .Net. That’s not currently possible, because Essence# doesn’t yet support network connectivity of any sort. But the plan is to port Craig Latta’s Flow framework to Essence# to solve that problem.

The idea is that, once Essence# has a fully-functional bidirectional object transport capability (such as the one used by Spoon,) it shouldn’t be at all difficult to establish a “stored procedure” architecture by which Smalltalk applications that don’t run on .Net could send code to Essence# for execution in a .Net environment–and vice versa, of course.

Sending code is a more powerful architecture than just sending remote procedure calls. In the latter case, you’re limited by the API provided by the server. In the former case, you can dynamically create your own API as the system runs. Even better, you can send code to be executed in an environment with better locality of reference to the data it needs, and better locality of reference to the functions that need to be applied to that data. It’s the same concept, and the same architecture, that GemStone has used so successfully for many years.

Not My Type: Dealing with CLR types in Essence#

There are three issues that you will encounter when using Essence# that involve CLR types:

  1. Obtaining a CLR type as a value that can be stored in a variable, passed as a parameter or sent messages;
  2. Converting a value from one CLR type to another; and
  3. Creating instances of CLR generic types.

Obtaining A CLR Type As An Essence# Value

The Essence# class System.Type (in the Essence# namespace CLR.System, which corresponds to the .Net namespace System) has many class messages that answer commonly-used CLR types: For example, the expression Type string evaluates to the .Net object that represents the .Net type System.String and the expression Type timeSpan evaluates to the .Net object that represents the .Net type System.TimeSpan. [Note: Those messages are unique to Essence#; the actual .Net class System.Type doesn’t support them. You can add Essence#-specific instance methods or class methods to any .Net class or struct; they just won’t be available outside of Essence#.]

Of course, if you have an instance of the type, you can just send it the message #getType (which is a message to which all .Net objects will respond.) Another way to get a CLR type is to send the message #instanceType to the Essence# class that represents CLR values having that type.

If the CLR type can’t be obtained using one of the techniques explained above, the fallback is to encode the assembly-qualified name of the type in a string, and then send the string the message #asHostSystemType, as shown in the following example:

'System.IO.ErrorEventArgs, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'
        asHostSystemType

In most cases (but not all,) the #asHostSystemType message will fail if the fully-qualified name of the assembly is not provided. Two exceptions to that would be when the type is in the mscorlib assembly or in the EssenceSharp assembly.
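
Putting the techniques above together, here’s a short sketch; nothing in it is new API, it just combines the messages already described, and the variable names are purely illustrative:

| typeFromClassMessage typeFromInstance typeFromName |

"Via an Essence#-specific class message of the Essence# class System.Type:"
typeFromClassMessage := System.Type timeSpan.

"Via #getType, to which every .Net object responds; the receiver here is the CLR Type object obtained above, so the answer is that object's own CLR type:"
typeFromInstance := typeFromClassMessage getType.

"Via #asHostSystemType; no assembly-qualified name is needed in this case, because System.TimeSpan lives in the mscorlib assembly:"
typeFromName := 'System.TimeSpan' asHostSystemType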

CLR Type Conversion

Formally–in other words, in theory–Essence# is dynamically typed: The “type” of an expression is simply the set of all messages that can be sent to the expression without causing a method-binding error. And that is also true in practice: the compiler permits any message to be sent to any expression, it permits any expression to be assigned to any variable or passed as any argument in any message, and typing errors are only discovered and reported at run time.

But that does not mean that any expression can be passed as any argument in any message to any receiver without causing any run-time type errors, even in cases where the only messages that will be sent to the value of the argument would not result in a method-binding error. That guarantee does hold in most other Smalltalk implementations, and it does hold when the method that will be invoked by the message is an Essence# method. But it does not hold when the “method” that will be invoked is a CLR method.

In Essence#, a CLR method is just a “user primitive”–although the “primitive” does not always need to be formally defined as a method in the Essence# source code, because the dynamic binding subsystem automatically binds Essence# messages to CLR methods that require fewer than 2 arguments, to CLR type constructors that require no arguments, and to any non-private properties or fields of a CLR type. And it is generally the case in all Smalltalk implementations that “primitive methods” will fail if the types of their arguments are not what they require–even if the primitive could have performed its intended operation simply by sending (using Smalltalk dynamic message dispatch) the right messages to the argument. Primitives generally don’t send messages to their arguments using Smalltalk dynamic message dispatch. And the methods of CLR types certainly do not.

Fortunately, the Essence# dynamic binding system will–if it can–automatically do the required type conversions for you, such as number type conversions, String/Symbol conversions, and any conversions defined by implicit or explicit conversion operators defined by a CLR type. It will even convert Essence# blocks into CLR functions with the required parameter type signature.
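
As a simple illustration of the block-to-delegate conversion, here’s a sketch. It assumes, per the binding rules described above, that #add: binds to the one-argument Add method of List<Object>, and that #forEach: binds to its ForEach method, whose Action<Object> parameter is satisfied by automatically converting the one-argument block:

| listClass list |
listClass := 'System.Collections.Generic.List`1[[System.Object]], mscorlib'
                asHostSystemType asClass.
list := listClass new.
list add: 'one'; add: 'two'.

"The one-argument block is converted into the Action<Object> delegate that ForEach requires:"
list forEach: [:each | System.Console writeLine: each]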

But the Essence# dynamic binding system isn’t magic. It can’t handle all cases. It’s simply not possible to convert any type into any other type. And even when it is, the run time system may not have the information required to do it correctly. And in other cases, the logic to do the required conversion simply hasn’t been implemented, for one reason or another.

CLR Array Types

One common case where a conversion might be possible, but the dynamic binding system doesn’t attempt to do it, involves arrays. From the point of view of the CLR, an Essence# array is an array whose elements have one of the following types:

  * System.Object (an instance of the Essence# class Array)
  * System.Byte (a ByteArray)
  * System.UInt16 (a HalfWordArray)
  * System.UInt32 (a WordArray)
  * System.UInt64 (a LongWordArray)
  * System.Single (a FloatArray)
  * System.Double (a DoubleArray)
  * System.Decimal (a QuadArray)
  * System.String (a Pathname)

Unfortunately, there are many methods of CLR types that require an array having elements of some type other than the ones supported by Essence#. If the CLR method parameter requires an array whose element type is not the same as that of the corresponding argument at run time, then the binding to the method will fail.

Fortunately, if the conversion of the array to one with the required type is possible (because all the elements can be converted to the required CLR type,) you can code that conversion yourself in Essence#. The following example shows how (this is not new functionality; it just hasn’t been documented/explained):

| clrSourceArray elementType arrayType argument |

"Step 1: Convert the Essence# array into a CLR array:"
clrSourceArray := #('one' 'two' 'three') asHostSystemArray.

"Step 2: Get the desired element type of the array to be passed as an argument:"
elementType := System.Type string.

"Step 3: Create a CLR array type with the necessary element type:"
arrayType := elementType makeArrayType.

"Step 4: Create the CLR array having the correct size and element type:"
argument := arrayType new: clrSourceArray size.

"Step 5: Copy the elements of the source array into the array that will be used as an argument:"
1 to: clrSourceArray size do: [:index | argument at: index put: (clrSourceArray at: index)].
^argument

Or you could just send the message #asHostSystemArrayWithElementType: to the Essence# array, as in the following example:

#('one' 'two' 'three')
        asHostSystemArrayWithElementType: System.Type string

CLR Generic Types

The CLR has three types of “generic type”: Open generic types, partially-open generic types and closed generic types. An open generic type is also called a generic type definition. An open generic type is one where all of its generic type parameters remain “open” because none of them have been bound to a specific type argument. A partially-open generic type is one where some, but not all, of the type’s generic type parameters have been bound to specific type arguments. A closed generic type is one where all of the type’s generic type parameters have been bound to specific type arguments. For the most part, a partially-open generic type is essentially the same as an open generic type. The distinction that really matters is between closed generic types and ones that have generic type parameters that aren’t bound to a specific type.

It’s possible to create instances of closed generic types, but it is not possible to create instances of open or partially-open generic types. And that can be a problem, because .Net class libraries typically only provide generic type definitions that define open generic types. So, although you can define an Essence# class that represents a .Net type that is an open generic type definition, you won’t be able to use that Essence# class to create instances of the type by sending it the normal instance creation messages (e.g, #new or #new:). Creating instances of such a type requires providing one or more types that will be used as the type arguments to construct a closed generic type from the open generic type defined by the .Net class library.

Fortunately, you can use Essence# to construct a closed generic type from an open generic type definition, as illustrated in the following example:

| openGenericDictTypeDefinition esClass closedGenericType dict |
openGenericDictTypeDefinition := 
        'System.Collections.Generic.Dictionary`2' asHostSystemType.
esClass := openGenericDictTypeDefinition asClass.
closedGenericType := esClass 
                        instanceTypeWith: Type string 
                        with: Type string.
dict := closedGenericType new.
dict 
        at: #foo 
        ifPresent: 
                [:value | 
                        System.Console 
                                write: 'The value at #foo is '; 
                                writeLine: value] 
        ifAbsent: 
                [System.Console writeLine: '#foo is not present'].
dict at: #foo put: #bar.
dict 
        at: #foo 
        ifPresent: 
                [:value | 
                        System.Console 
                                write: 'The value at #foo is '; 
                                writeLine: value] 
        ifAbsent: 
                [System.Console writeLine: '#foo is not present'].

The Essence# class that represents a CLR type can be obtained by sending the message #asClass to the CLR type object.

If an Essence# class represents a generic type (whether the type is open, partially-open or closed makes no difference,) then you can create a closed generic type by sending one of the following messages to the Essence# class: #instanceTypeWith: aCLRType, #instanceTypeWith: aCLRType with: aCLRType, #instanceTypeWith: aCLRType with: aCLRType with: aCLRType, and so on for additional type arguments, or #instanceTypeWithAll: anArrayOfCLRTypes.
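
For example, here’s a sketch that uses #instanceTypeWithAll: to construct a closed generic dictionary type; it assumes that the message accepts an ordinary Essence# array of CLR types (built here with the standard Array class message #with:with:):

| esClass closedGenericType dict |
esClass := 'System.Collections.Generic.Dictionary`2' asHostSystemType asClass.
closedGenericType := esClass instanceTypeWithAll: (Array with: Type string with: Type object).
dict := closedGenericType new.
dict at: #foo put: #bar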

One good way to handle the case where the .Net type you want to use is an open generic type definition would be to define a subclass of an Essence# class that represents that type, and then use one of the messages above to construct a closed generic type that will be the instance type of the subclass.

Another good way to do it is simply to use the CLR’s syntax for closed generic types when specifying the instance type of the subclass. The following examples show both approaches:

	"Class creation/configuration using the Essence# #instanceTypeWith: message"
	
	| superclass class |
        superclass := 'System.Collections.Generic.List`1, mscorlib'
                                asHostSystemType asClass.
        class := Class new.
	class 
                superclass: superclass;
                instanceType: (superclass instanceTypeWith: System.Type object).

	"Class creation/configuration using the CLR's syntax for closed generic types"
	| class |
        class := Class new.
	class 
                assemblyName: 'mscorlib';
                hostSystemNamespace: #System.Collections.Generic;
                hostSystemName: 'List`1[[System.Object]]'

Invoking .Net methods that have “out” or “ref” parameters

The latest release of Essence# introduces the ability to invoke .Net methods that have “out” or “ref” parameters. This post will explain what those are, why supporting them is a technical challenge, and how to pass such parameters to .Net methods that use them.

Conceptually, there are only two ways to pass parameters to functions: By value or by reference. Although there are (usually rather old or even ancient) programming languages that pass all parameters “by reference,” most programming languages pass parameters “by value” in the default case. And some languages, such as Essence#, pass all parameters “by value.”

The following example will be used to illustrate the difference in the semantics:

var x = 3;
var y = f(x);

If the variable x is passed to the function f as a “by value” parameter, then it will be impossible for the function f to change the value of the variable x. However, if the variable x is passed to the function f as a “by reference” parameter, then the function f will be able to change the value of the variable x (although it may nevertheless refrain from doing so.)

Although there is more than one way to implement the two different ways of passing parameters, conceptually the difference is that, when arguments are passed “by value,” only the value of the argument is passed in to the function being called, whereas when arguments are passed “by reference,” the address of the argument (typically, the address of a variable) is passed in to the function being called. In the latter case, the called function may use that address to assign a new value to the variable–although it need not do so.

In fact, many languages implement “pass by value” semantics by physically passing in the address of the arguments that are passed “by value,” but nevertheless disallowing assignment to function parameters. As Peter Deutsch famously said, “you can cheat as long as you don’t get caught.” However, neither the CLR nor the DLR’s LINQ expression compiler cheat in that way: Unless a parameter is declared to require “pass by reference” semantics, arguments are passed by copying their values into the activation frame of the function being invoked.

So the problem for languages implemented on the CLR, such as Essence#, that have no syntax for specifying that parameters will be passed “by reference,” is that there’s no straightforward way to invoke the methods of CLR types that have any “pass by reference” parameters, where the intended usage is that the called method will, as part of its normal operation, assign a new value to an argument that is passed by reference.

And there’s an additional complication: Some methods that use “pass by reference” parameters will neither need nor use the initial value of the parameter, because they only set its value, and never “read” it. Conversely, others will use the initial value to compute and set a new value for the argument. And yet others may do one or the other with the same parameter, depending on the state of the receiver and the value of other parameters.

The C# language attempts to reduce the ambiguity regarding the usage model of “by reference” parameters by requiring that the method definer use different syntax to declare parameters, depending on whether the method will or will not use the initial value of a “by reference” parameter to compute and then set the argument’s new value. In the C# syntax, a parameter that is passed by reference must have one of two keywords as a prefix: out or ref.

The C# out keyword causes the compiler to require that the method that declares the parameter must first assign a value to the parameter before the parameter can be used–and before the method can return. That makes it impossible to use or access the initial value of the parameter. In contrast, the C# ref keyword causes the compiler to require that whoever invokes the method must first assign a value to the variable that will be passed as the argument.

But C# is not the only language used on the CLR, and the CLR itself does not enforce any constraints on the usage of “by reference” parameters. A .Net language’s compiler may do so, but such is not required by the CLR. That means there may exist .Net types that have methods that use “pass by reference” parameters that do not abide by the rules of C#.

The reflection metadata provided by the CLR does reveal, for each method parameter, whether it is a “pass by value” or a “pass by reference” parameter.  It may also reveal whether the intended usage of a “pass by reference” parameter is as a C#-style out parameter. But compilers are not required to provide that particular piece of metadata. The parameter may be used as an out parameter even if the reflection metadata makes no such claim. The only information provided by the reflection metadata that must be accurate is whether or not the parameter requires that its corresponding argument be passed “by value” or “by reference.”

So for now, Essence# deals with .Net method parameters that require that arguments must be passed by reference in one of two ways:

If the actual argument at runtime is anything other than a block or delegate, the dynamic binding subsystem will create and initialize a new variable that will be passed as the argument. Unless the reflection metadata claims that the parameter is an “out” parameter, the variable will be initialized to have the same value as the argument specified by the invoking code. The method will then be invoked with that new variable as the argument corresponding to the “by reference” parameter. When the method completes, the value of the created variable that was passed in as the actual argument will be assigned to the expression that represents the originally-specified argument. That assignment may cause an exception, and it may fail to update the original variable. I’m researching improvements, perhaps using the DLR’s Meta Object Protocol.

However, if the actual argument at runtime is a block or delegate, then the dynamic binding subsystem will create and initialize a new variable that will be passed as the argument (just as it does for the other case.) The method will then be invoked with that new variable as the argument corresponding to the “by reference” parameter (still essentially the same procedure as in the other case.) But when the method completes, the procedure is different: The one-argument block or delegate that was the originally-specified argument will be invoked, with the newly-created variable as its argument. That permits the caller to access the value of the “out” parameter, because its value will be the value of the block (or delegate) argument. Note, however, that the block or delegate must have an arity of one, and if it’s a delegate, its parameter type must be assignable from the parameter type of the invoked method’s parameter.

Example Usage

Let’s assume that we want to invoke the method Get as defined in the C# class named ValueModel shown below:

class ValueModel {
        private static readonly Object notAValue = new Object();

        protected Object value = notAValue;

        public bool HasValue {
                get {return value != notAValue;}
        }

        public void Set(Object newValue) {
                value = newValue;
        }

        public bool Get(out Object value) {
                if (HasValue) {
                        value = this.value;
                        return true;
                }
                value = null;
                return false;
        }
}

If we have an instance of the above ValueModel class in a variable named model (in an Essence# block or method,) we can invoke the Get method as follows:

[:model |
        | value |
        (model get: [:v | value := v])
                ifTrue: [value doSomethingUseful]
]

Or more simply, we could just do:

[:model | model get: [:value | value doSomethingUseful]]

Important: If the .Net method being invoked returns a boolean value, then the block will only be invoked if the .Net method returns true. But if the .Net method has a return type of void, or has a return type other than Boolean, then the block will always be invoked.

Acts: New Release Of Essence# Introduces Threads

Essence# was released to the public almost 2 months ago. Since that time, the following features and capabilities have been added: Exceptions, Collections, Traits, Announcements and .Net Events, passing blocks as parameters to .Net methods where the corresponding parameter is a .Net delegate, and defining Essence# classes that represent .Net generic type definitions.

And now the ‘Acts’ release of Essence# introduces support for Threads, (asynchronous) Tasks, Semaphores, Monitors and Mutexes, the ability to call .Net methods that use “out” or “ref” parameters, the ability to convert Essence# arrays into .Net arrays (with any .Net type as the element type,) the ability to create such arrays, and the ability to construct closed generic types from open generic type definitions, and create instances of those types. And it fixes various bugs.

The Acts release can be downloaded from the Essence# site on CodePlex (which gets you the installer.) Alternatively, the code for the run time system (written in C#) can be obtained from the Essence# CodePlex site using Git, and the code for the Essence# Standard Library can be obtained from GitHub. And the example scripts can also be downloaded from the Scripts repository on GitHub.

The Essence# Development Plan

Before getting into the details of the new features and capabilities introduced in this release, I should explain the overall development plan:

The highest priority goes to making the compiler and the run time system work correctly. The next priority is adding whatever features to the compiler and/or the run-time system may be required to a) have a fully-functional programming language that conforms to the ANSI Smalltalk standard to the extent that that’s possible on .Net, and b) to enable the maximal degree of interoperability with .Net that is possible without compromising support for the language’s conformance to the ANSI Smalltalk standard. Next to lowest priority is adding whatever classes or methods to the Essence# Standard Library may be required in order to conform to the ANSI Smalltalk standard. And everything and anything else has the lowest priority.

Essence# will stay in alpha until both the language and the standard class library provide full ANSI Smalltalk compatibility, to the extent that that’s possible on .Net.  It will remain in beta until Essence# code a) can be compiled to .DLLs and .EXEs, and b) can be debugged using the Visual Studio debugger. The second requirement depends on the first–debugging with Visual Studio is simply not possible using the type of  (CLR) methods that the Essence# compiler currently generates (which .Net calls “dynamic methods.”)

Typically, when support for new feature domains is added to Essence# (e.g, exceptions, collections, events, threads, files/streams, etc.,) the first step will be releasing Essence# classes that simply wrap the native .Net classes for that feature domain. In some cases, nothing more will be needed. But in most cases, in order to provide ANSI Smalltalk conformance,  it will be necessary (in later releases) to add more classes and methods to the Essence# Standard Library that provide an ANSI Smalltalk facade over the native .Net classes.

Completing the task of adding Essence# classes that wrap (directly represent) the native .Net classes that are needed to complete the planned feature set of the Essence# Standard Library has priority over implementing Essence# classes that adapt those classes so that they provide an API that conforms to the ANSI Smalltalk standard. That’s why this release does only the former, and not the latter (with some minor exceptions.) It’s also why the same will be true for the next major feature domain: Streams and files.

Using the Essence# Concurrency Classes

This release implements Essence# classes that represent (“wrap”) many of the core .Net classes for multitasking and concurrency that are defined in the .Net System.Threading namespace, such as Thread, Monitor, WaitHandle, EventWaitHandle, SemaphoreSlim, Mutex, SpinLock, SpinWait, ReaderWriterLockSlim and ThreadPool. It also implements Essence# classes that represent (“wrap”) many of the classes defined in the .Net System.Threading.Tasks namespace, such as Task, TaskScheduler and Parallel.

The new classes reside in the Essence# Standard Library in the namespaces CLR.System.Threading and CLR.System.Threading.Tasks. In a few cases, the Essence# class name is not the same as the .Net class name. Examples: CLR.System.Threading.Semaphore -> System.Threading.SemaphoreSlim, CLR.System.Threading.ReaderWriterLock -> System.Threading.ReaderWriterLockSlim.

Note that these new classes are not defined–and therefore not visible–in the default system namespace (“Smalltalk.”)

Also note that, in some cases, the new classes represent (“wrap”) .Net generic type definitions (“open generic types,”) which means one cannot create instances of such types. To create instances, it is first necessary to construct a “closed generic type” by supplying a generic type argument that corresponds to each of the generic type parameters of the generic type definition. This version of the Essence# Standard Library also includes Essence# classes that do just that for the .Net classes Task`1 (“Task<>” using C# syntax), TaskCompletionSource`1 (“TaskCompletionSource<>” using C# syntax) and ThreadLocal`1 (“ThreadLocal<>” using C# syntax.) Those classes wrap generic types that are constructed by supplying System.Object as the generic type argument to the base generic type definition. They reside in the namespace Smalltalk.Multitasking, and are named Smalltalk.Multitasking.Task (which represents the .Net type Task<System.Object>), Smalltalk.Multitasking.TaskCompletionSource (which represents the .Net type TaskCompletionSource<System.Object>) and Smalltalk.Multitasking.ThreadLocal (which represents the .Net type ThreadLocal<System.Object>).
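
Because the #instanceTypeWith: messages described earlier can be sent to any Essence# class that represents a generic type, whether open or closed, those wrapper classes can also be used to construct closed generic types with other type arguments. A minimal sketch, not a definitive recipe:

"Construct the closed generic type Task<String> from the class that represents Task<Object>:"
| taskOfStringType |
taskOfStringType := Smalltalk.Multitasking.Task instanceTypeWith: System.Type string.
System.Console writeLine: taskOfStringType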

You should also know that an Essence# class that represents any closed generic type inherits from the Essence# class that represents the corresponding open generic type definition. So the class Smalltalk.Set–which represents the .Net type System.Collections.Generic.HashSet<Object>–inherits from the Essence# class that represents the .Net type System.Collections.Generic.HashSet<>, and Smalltalk.Multitasking.Task inherits from the Essence# class that represents the .Net type System.Threading.Tasks.Task<> (using C# syntax for the type names.)

If you browse the Essence# code that implements the new classes, what you won’t see is an Essence# method that corresponds to every method and property of the corresponding .Net type. They are there nonetheless, and you can call them in spite of the fact that you don’t see the method definitions in the Essence# code. And that’s true of any and all Essence# classes that represent (“wrap”) a .Net type. If the .Net type represented by an Essence# class has a property or method named “Start,” you can invoke the method or get the value of the property by sending the message #start (or #Start, if you prefer) to an instance of the .Net type, even though there is no such method defined in the Essence# code.

Generally, a method will only be defined in Essence# code when the corresponding .Net method requires 2 or more parameters, or when the type has constructors that require any arguments. Normally, methods in Essence# code only need to be explicitly defined to access a .Net property, or to invoke a .Net method with fewer than 2 parameters, when it is desired to use a different name for the Essence# message than the one defined by .Net (perhaps because the message selector traditionally used in Smalltalk for the operation with the same semantics is different.)

However, if the class is a native Essence# class which does not “wrap” a .Net type, then almost all methods not inlined by the compiler (or not added dynamically by sending a message such as #addMethod: to the class) must be explicitly defined, with a very small number of exceptions (currently, those would be #doesNotUnderstand:, #ensure:, #ifCurtailed: and #on:do:). The compiler inlines many of the same messages as would be the case in other Smalltalk implementations.

You can print out a list of all the methods defined by an Essence# class (including those that come from traits, or are inherited from its superclasses) by using the ES command line program to run code on the command line (e.g, using the Windows Command Prompt) such as the following, which will print out the names of all the methods of class Array:

Array allSelectorsDo: [:selector | System.Console writeLine: selector]

Example Of Thread Usage

The methods #newProcess, #newProcessAt:, #fork and #forkAt: have been added to class Block as an initial stab at giving blocks standard behavior with respect to threads (“processes.”) Here’s an example of using blocks to start threads, and of using CLR.System.Threading.Semaphore to schedule threads:

| semaphore |

semaphore := CLR.System.Threading.Semaphore new.

[semaphore wait. 
System.Console writeLine: 'Thread one reporting'] fork.

[System.Console writeLine: 'Thread two reporting'. 
semaphore signal] fork.

The #fork message creates a Thread from the block that is its receiver, and starts it running. The first thread immediately begins waiting on the semaphore.  The second thread writes out a message to the console, and then signals the semaphore, which causes the first thread to stop waiting on the semaphore and write out its own message to the console.
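
The #newProcess message can be used in a similar way when the new thread should not start running immediately. Here’s a minimal sketch; it assumes that #newProcess answers a (wrapped) System.Threading.Thread that has not yet been started, so that, per the dynamic binding rules described above, the CLR’s Thread.Start method can then be invoked by sending it the message #start:

| thread |
thread := [System.Console writeLine: 'Thread three reporting'] newProcess.
"Do whatever set-up must happen before the new thread runs, then start it:"
thread start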

And that’s it for now. I’ll publish more posts over the next week that describe and explain the other new features and capabilities, such as calling .Net methods that require “out” or “ref” parameters.

Apple’s New ‘Playgrounds’: Back To The Future, One More Time

So Wired thinks Apple’s Playgrounds development paradigm is revolutionary?

Not so much: Smalltalk programmers have been coding with analogous capabilities since before Steve Jobs ever saw his now famous demo at Xerox PARC.

The Smalltalk development environment now has close to 40 years of maturity and experience behind it. And its cool features from 1979 still haven’t all been added to the IDEs of other languages, to say nothing of the ones added since then.

Ref: http://www.wired.com/2014/07/apple-swift/

Static Typing And Interoperability

Static typing inhibits interoperability. It does so between class libraries and their clients, and also between programming languages.

The reason is simply that static typing violates encapsulation in general, and improperly propagates constraints that aren’t justifiable as architectural, design or implementation requirements in particular. It does so either by binding too early, or by binding (requiring) more than it should.

The reason that static typing violates encapsulation is that it prevents the object itself from being the sole arbiter of its own behavior, and the sole arbiter of the semantics of any operations that may be applied to it. It does that by forcing the programmer to break the veil of encapsulation of the object by directly referencing its class or type as the static type constraint of variables that will be permitted to contain objects of that type, instead of respecting the privacy of that information, which should remain hidden behind the wall of encapsulation that objects are supposed to provide. And then that information is used by the compiler to force the behavior of objects based on the static type constraint of the variables that reference them, as well as the semantics of the operations applied to them, thus violating encapsulation even more profoundly.

As usually implemented by widely-used programming languages, and therefore as typically used by program developers, static typing confuses the distinction between a class and the type that the class implements. And even in an ideal implementation in some language that almost no one will ever actually use in anger, it violates encapsulation by confusing the variables or parameters that reference a value with the class of the referenced value, the type of the referenced value, or (usually) both.

A class is not a type. It’s an implementation of a type.  Therefore, any system built on that false assumption is intrinsically broken at a very deep level. If you write code based on the idea that classes are types, you need to transform your thinking.

And a variable is not the value it references. Therefore, any system built on that false assumption is intrinsically broken at a very deep level.  The semantics of a variable is the architectural role(s) it plays in the algorithm that uses it. It is not the class of the value of the variable that the variable may be referencing at any particular time, nor the type that that class may be implementing at that particular time (the type that a class implements is not necessarily a static property of the class.)

As a concrete example, consider the usage of an array object in a statically-typed language which must hold objects of different types (or different classes, it makes no difference in this case,) and therefore must use Object as the static type constraint for the array’s element type (assuming there is no less-general common superclass.) Putting elements into such an array is easy, and raises no issues.  However, although it is equally easy to fetch elements from such an array, using the retrieved values for other than the most trivial purposes typically requires the programmer to over-constrain the retrieved value: The programmer is forced to constrain such values to being instances of some predefined type (which often also means constraining them to be instances of some predefined class,) before such values can be used in application-specific or domain-specific ways. The programmer must impose such constraints by casting the value to a specific type or class–which will fail if the programmer misjudges the type of the object, even though there would have been no failure had the programmer been allowed to use the value in the operations that were needed without having to cast the value to some specific type (because those operations would be valid for all the elements of the array, regardless of their type or class.)

Typically, one only needs to send one or two messages to values retrieved from a variable or from a collection. One usually does not need to send every message (or apply every operation) defined by a particular type or class. Therefore, requiring that a value must be an instance of a particular type or class is almost always requiring more than is necessary–and usually, far more. Imposing such unnecessary constraints inhibits interoperability because it reduces the generality of the code, reduces the generality of the entities the code defines, and reduces the generality of the entities that the code can make use of.

And the violation of encapsulation bubbles up to higher-level entities: Classes, modules, packages, assemblies, class libraries and entire frameworks–which is how it inhibits interoperability, not only between class libraries or application frameworks and their clients, but even between different programming languages.

Static typing dramatically increases the coupling between components, because it not only exposes the specific entities that define and/or implement all the behavior of a value, it also requires that clients use precisely and only those specific defining or implementing entities, and typically prohibits the use of functionally/semantically equivalent alternatives that would have worked as desired. This is true even when interfaces are pervasively used as static type constraints (which is rarely the case in practice, and so should in all fairness be dismissed as a moot point,) because clients must in any case use only the specific interfaces used and/or defined by the component providing the service they’d like to use.

Pervasively using interfaces as static type constraints, although it definitely helps, nevertheless remains a problem a) because interfaces typically require behavior that just is not needed every time one of their instances is used (which is almost always the case,) b) because interfaces defined by third parties–and the classes and methods that use them–generally cannot be changed in any way by their clients, and c) because it is possible (and even likely) that the various providers of “reusable” (ahem) components will separately define what are conceptually “the same” interfaces for the same purposes, which nevertheless won’t be type-compatible even if they have exactly the same names and require precisely all the same operations (messages, operators, etc.)

Why? Because “reusable” components from independent sources that use what are conceptually “the same” interfaces nevertheless cannot easily interoperate with each other, since two (or more!) interfaces that aren’t defined in the same namespace will be classified by the static type system as different, incompatible types, no matter how identical they are otherwise. That means that, for example, the function provided by class library A from source Alpha that transforms a video stream by applying a user-supplied function to the color of each pixel cannot be used with the function that performs the desired color transformation provided by class library B from source Beta, because the two class libraries each use their own “Color” interface, which is not type compatible in spite of having the exact same name and defining the exact same set of required operations with the exact same semantics.

And that’s a short–and incomplete–example of why and how static typing substantially reduces a programmer’s degree of freedom in combining and reusing code components from different, independent sources, relative to what it could be if programming languages used the open-world assumption instead of the closed-world assumption regarding which operations are valid. Operations should be assumed valid until that assumption is proven false, for reasons that are analogous to those that justify the assumption that a person accused of a crime is innocent until proven guilty, or those that motivate the assumption that the null hypothesis is true until proven false by actual experimental observation.

Most of human progress over the past few centuries is directly attributable to discarding the prior obsession with being able to present absolute proofs, and replacing that paradigm with what we now know and love as the scientific method, which is based on falsifiability instead of being based on provability. Our modern technology only exists because we gave up the idea that our models of the world must be provably correct, and instead started using the paradigm of falsifiability, as embodied by the scientific method.

The programming world is in severe need of a Copernican Revolution so that we can discard the ever-more complicated epicycles and contortions that the static typists go through to try to match the generality, reusability and productivity of dynamic programming languages.

Note: You can vote on this essay on Reddit

How To Develop Code For Essence# Using Your Favorite Smalltalk’s Development Tools

Essence# currently provides no support for any sort of GUI. Nor does it provide any code editing or code browsing tools. You might think that that means that the only way to develop code in Essence# is to use generic text editors, but such is not the case.

You could develop–and debug–your code using, for example, Squeak, Pharo or VisualWorks. That can be done because, unless your code relies on non-standard syntax that your development environment’s Smalltalk compiler accepts (or worse, requires), the Essence# compiler can and will compile it. It will also accept/compile many of the common non-standard syntax extensions, such as qualified (“dotted”) names (VisualWorks, Smalltalk-X,) dynamic array literals (Squeak, Pharo) or even dynamic dictionary literals (Amber.)

So the heart of the issue is the differences in the standard libraries supported by other Smalltalk implementations. But that issue can be addressed by adding Essence# compatibility classes and methods to the environment where you develop your code, and by adding foreign-environment compatibility classes and methods to your suite of Essence# libraries/namespaces when you port your code to Essence#.

I expect a large body of such inter-Smalltalk compatibility code to be developed over time–some by me, and some of it contributed by others. In Essence# such special-purpose code modules don’t have to be installed by default. They can be partitioned into separate libraries that are only loaded when and as needed. So if you don’t need either the VASmalltalk or the Dolphin compatibility library, just don’t load them.

However, there is one obstacle in the path of this strategy: The file-out formats of other Smalltalk development environments are not at all the same as the Essence# class library format. Given that, it would actually be a lot of work to port code from VisualWorks, Squeak or Pharo to Essence#.

Or more correctly, it used to be a lot of work, because I’m using this post to publish and explain a solution to the problem: Code that will file out a class or class category from VisualWorks, Pharo or Squeak into Essence# class library format. The code can be obtained from the Essence# Tools Repository on GitHub. Although at present only those three development platforms are supported, it should not be too hard to figure out how to port one of them to another Smalltalk development platform. If you do so, please contribute it to the community.

The version of the code export tool for each of the three development environments is named EssenceClassLibraryExporter, and each has the same external API–which will be explained below. The one for Squeak is located here, the one for Pharo is located here, and the one for VisualWorks is located here. Warning: Don’t export code that you do not own, or that is not open source, unless you have the appropriate permission(s.) The VisualWorks class libraries published by Cincom are not open source, although the “Contributed” code may be (don’t assume; verify.)

After you’ve filed EssenceClassLibraryExporter.st into Pharo, VisualWorks or Squeak, you use it by writing code in a workspace and executing it as a “do it.” Here’s what you need to know in order to do it correctly:

There are two key issues: 1) What code should be exported, and 2) what Essence# class library and namespace should it be exported to.

Because of the way the EssenceClassLibraryExporter works, it’s best to answer the second question first.

Selecting the target Essence# namespace

When exporting from VisualWorks, you probably should use the same namespace name in Essence# that you use in VisualWorks. But when exporting from a Smalltalk development environment that doesn’t have (or doesn’t typically use) namespaces, you’re faced with an architectural decision that you didn’t have to make until such time as you needed to export your code to Essence#. You can, of course, simply export your code to the Smalltalk namespace. But best practice would be to define and use a namespace in Essence# that’s specific to your project (unless you are deliberately extending a namespace authored by someone else.)

Namespaces in Smalltalk have always been controversial. There is a vocal and influential subset of the Smalltalk community that opposes them. So I am going to address that issue upfront:

First, I must point out that all ST80-compliant Smalltalk implementations that support “shared pools” actually do have and use namespaces. That’s precisely what a “shared pool” is: a namespace. Shared pools could be used to do all the things that namespaces do, but for the fact that the ST80-style compilers and code browsers generally used (with the notable exceptions of VisualWorks and Smalltalk-X, and soon Essence#) haven’t been designed to make such usage elegant and easy. But they could be.

And Smalltalk classes also act as namespaces, in that they can define “class variables” that can be used to change the meaning of references to variables defined in the SystemDictionary. And “shared pools” can have the same effect. Consequently, having formal support for modern, first-class namespaces adds no code comprehension or interpretation issues that were not already present even in Smalltalk-80 ca. 1983, as distributed by XSIS.

But the bottom line is that computer science long ago learned that different name binding scopes are necessary–and become ever more necessary as the size of the code base increases, and as the number of independent contributors increases. That’s why objects provide their own “namespaces” for their instance variables, so as to keep each object’s set of named instance variables separate and distinct from that of all other objects, even when the names of the variables are the same. It’s why each method defines and uses its own “namespace” of parameters and local variables–and why blocks do the same. The bottom line is that Essence# (and VisualWorks, and Smalltalk-X, and .Net, and most modern languages) provide namespaces that localize and encapsulate references to non-local variables for analogous reasons, to solve problems that are essentially the same, using a very similar solution, justified by issues and logic that are also largely analogous.

To the extent that multiple name binding scopes cause problems of comprehension or interpretation with respect to the meaning of named variable references, they do so universally for named variable references of all types and in all contexts: Block and method parameters, block and method local variables, function parameters and local variables, the fields of records/structs, the named instance variables of objects, the class variables and shared pools of ST80, the static variables of C++, C# and Java, and modules/assemblies/programs that contain multiple classes and/or functions.

If, on balance, programmers and the code bases they write get more benefit than harm from having separate name binding scopes for class variables, instance variables, parameters and local variables, why would that not also be the case for classes?  Is that not one of the key benefits provided by having separate program files in the case of file-based languages, and separate images in the case of image-based Smalltalk systems? If separate name binding scopes are bad when provided by formal, first-class namespaces, why are they not also bad when implemented by having different images, or by having different program files?

For those reasons, opposition to formal namespaces as first-class objects has always seemed to me to be the equivalent of opposition to compilers by old-school assembly-language coders, opposition to structured-programming by old-school FORTRAN and COBOL coders, opposition to OO by old-school procedural programmers and opposition to dynamic typing by those who think a variable is semantically equivalent to the object it references, and/or that a class is semantically equivalent to the type (concept) it implements.

I’ve used VisualWorks with namespaces for a decade and a half, and have never encountered the issues that opponents of namespaces say they fear.  Not once, not ever. That’s one reason it reminds me so much of opposition to dynamic typing.

Get over it. Move on: Formal namespaces as first class objects solve more problems than they cause. The proof is in the experience of those who’ve been using them for one or more decades now, and in the ever-increasing adoption of them in modern programming systems and languages.

Selecting the target Essence# class library

Conceptually, an Essence# class library is an independently-loadable code module. It roughly corresponds to what other languages/platforms call a “package,” “module,” “parcel” or “assembly” (and these, too, were historically opposed by some; thankfully, they “got over it.”)

It’s important to note that an Essence# class library can contain one or more Essence# namespaces, that the same Essence# namespace can occur (be defined and/or modified by) more than one Essence# class library, that namespaces are an essential and foundational aspect of the .Net platform, and that Essence# namespaces were designed to deal with the fact that namespaces are so central to the way that the .Net framework operates.

You partition your code base into one or more class libraries so that your code modules can be loaded when and as needed as logically-consistent and cohesive units of functionality. You partition your code into one or more namespaces in order to isolate the named entities in your code base from unwanted name clashes with other parts of your own code base, or from unwanted name clashes with code from third parties (of which you may have no knowledge or even interest.) That’s two separate concerns solved by two separate mechanisms (“objects,”) which is proper OO design.

Key point:  You probably should not export your code to the Essence# Standard Library, for the reasons explained in the Essence# documentation section on CodePlex, under the heading Handling multiple releases of Essence#. For one thing, it puts your work at risk of being overwritten when you merge with a new release of Essence# using the Essence# installation program. Using your own private Essence# class library avoids all risk of that happening, and provides several other important benefits that I won’t go into here.

Using the EssenceClassLibraryExporter

The following code snippet will export the class named MyClass to the Essence# class library MyClassLibrary in the Essence# namespace MyNamespace, assuming that MyClass is locally defined in the namespace Smalltalk (i.e., that’s the namespace in which the class is defined in the Pharo, Squeak or VisualWorks image you’re using):

EssenceClassLibraryExporter libraryPath: #('MyClassLibrary').
(EssenceClassLibraryExporter 
        exportingTo: 'MyNamespace' 
        from: Smalltalk)
                exportClass: MyClass

By default, the exported code will be placed in the current working directory of your Smalltalk image. So in the case of the example above, the folder representing/implementing the target Essence# class library would therefore be the current working directory of your Smalltalk image, and the target namespace would be implemented/represented by the folder MyNamespace, which would be created as a direct subfolder in the image’s current working directory.

To use a different folder as the target Essence# class library, send the message #libraryPathPrefix: aPathnameString to the class EssenceClassLibraryExporter before exporting your code, as in the following example:

EssenceClassLibraryExporter libraryPathPrefix: 
        '/Users/myUserName/Documents/Developer/EssenceSharp/Source/Libraries/'.

To export to a nested Essence# namespace, use a qualified name (“dotted notation”) for the target Essence# namespace name, as in the following example, which would export to the nested namespace MyNamespace.MyNestedNamespace, at the absolute path ‘/Users/myUserName/Documents/Developer/EssenceSharp/Source/Libraries/MyClassLibrary/MyNamespace/MyNestedNamespace/’:

EssenceClassLibraryExporter libraryPathPrefix: 
        '/Users/myUserName/Documents/Developer/EssenceSharp/Source/Libraries/'.
EssenceClassLibraryExporter libraryPath: #('MyClassLibrary').
(EssenceClassLibraryExporter 
        exportingTo: 'MyNamespace.MyNestedNamespace' 
        from: Smalltalk)
                exportClass: MyClass

To export all the classes in a class category, replace the message #exportClass: aClass as in the examples above with the message #exportClassCategory: aSymbolNamingAClassCategory, as in the following example:

EssenceClassLibraryExporter libraryPathPrefix: 
        '/Users/myUserName/Documents/Developer/EssenceSharp/Source/Libraries/'.
EssenceClassLibraryExporter libraryPath: #('DevelopmentTools').
(EssenceClassLibraryExporter 
        exportingTo: 'SystemTesting.SUnit' 
        from: Smalltalk)
                exportClassCategory: #'SUnit-Core-Kernel'

If you use the Pharo version of EssenceClassLibraryExporter (in a Pharo image, of course,) then Traits will be handled correctly.

Essence#’s Predecessor: Iron Smalltalk

Before Essence#, there was IronSmalltalk.

When I started work on Essence#, on or around 9 Jan 2014, I had never heard of Iron Smalltalk.  That name, of course, would be a rather obvious choice, given names such as “Iron Python,” “Iron Ruby” and “Iron Scheme” for other DLR-hosted languages. And yes, I not only considered using “IronSmalltalk” as the language’s name, I actually did use it initially.

Nor do I believe that either of my two advisers/consultants on the project (Craig Latta and Peter Lount) had ever heard of an actual ‘IronSmalltalk’ either (as a real implementation of the language, and not as a concept.) If they had, they certainly didn’t mention it. And Craig Latta was in the middle of a multi-week visit here with me where I live during the time I started the Essence# project, and I was speaking by phone with Peter Lount on a daily basis about the project (and we’re still doing that.) So both of them would have had plenty of chances to tell me about the “other” IronSmalltalk while I was still using that name for what will now always be known as Essence#.

In any case, by mid February (2014,) I changed the name–not because I had discovered that there already was an IronSmalltalk,  but because I didn’t want to have the word “Smalltalk” in the language’s name (but this post isn’t about that, so I’ll explain why I came to that view some other time.)

Fast forward to late May 2014: That’s when I discovered that there was an actual Smalltalk implementation based on the DLR other than mine, and that it was named IronSmalltalk.

Even now, I don’t know all that much about it. I haven’t browsed the code other than to have looked at the folders and file names using the CodePlex source code browsing applet.

Why not? Partly because I’m just too busy implementing Essence#, partly because it’s almost certainly way too late to make any significant architectural changes to Essence# based on whatever I might learn by reading Todor Todorov’s code (he’s the author,) and partly because I just don’t want to plagiarize it (even though it’s open source.)

Most of what I know about IronSmalltalk’s design and implementation I learned just by watching the video of Todor Todorov’s ESUG 2011 presentation on IronSmalltalk. From that, I can tell that the author of IronSmalltalk a) knows what he’s talking about, b) made some of the same architectural decisions I did, but c) did some things rather differently.

For example, IronSmalltalk uses CLR Strings as the direct implementation of Smalltalk Strings, but Essence# does not. The reason is that CLR Strings are intrinsically immutable. Yes, the ANSI Smalltalk Standard requires that String literals be immutable–but that’s not a problem for Essence#, because any Essence# object can be made immutable. And Strings have traditionally been mutable in Smalltalk. Interestingly, IronSmalltalk and IronPython both decided to adopt CLR Strings as their native String objects, while IronRuby and Essence# both decided to implement their own Strings. That said, Essence# can and does use CLR Strings as well (and I assume IronRuby does the same.)

Just so you know: The primary reason that Essence# doesn’t use CLR Strings and doesn’t use CLR arrays as its “native” implementation of Strings or of Arrays is because the #become: primitive is intrinsically not implementable on the CLR. By implementing both Strings and Arrays as a wrapper over native CLR arrays, much of the pain of not having a #become: primitive is eliminated: The double indirection makes it possible to resize Strings and Arrays “in place” without needing to use #become:. The fact that Smalltalk Strings have traditionally been mutable was a secondary consideration.

Another example involves the DLR’s dynamic binding protocol: IronSmalltalk does it the way the DLR documentation strongly recommends, and Essence# does not. So what’s the recommended way, and why doesn’t Essence# do it that way?  Glad you asked:

The Canonical DLR DynamicMetaObject Protocol For Binding Abstract Operations To Concrete Behavior

The officially recommended (“canonical”) protocol for dynamic binding using the DLR’s DynamicMetaObject Protocol can be expressed as follows:

1. Each operand of an operation (e.g., a message send, although in other languages there are many other possibilities, because most languages do so many things using special syntax) is asked to provide a DynamicMetaObject to act as its agent for participating in the DLR’s meta-object protocol for dynamic binding. If the object is unable to do so, a default DynamicMetaObject is created for it (sort of like a court-appointed public defender.)

2. The DynamicMetaObject that is the agent for the object that is responsible for dynamically (at run time) determining the semantics of the operation is asked to bind the abstract operation (whatever that may be) to a specific physical implementation. Note that identifying which object that is–or perhaps which set of objects–is the responsibility of each DLR-based language’s dynamic binding subsystem.

For example, in the case of most OO languages, the DynamicMetaObject that is the agent for the object that is the receiver of a message would be asked to provide a binding for that message–in other words, an invocable function and associated “binding restriction.” A “binding restriction” is a predicate that can be evaluated to discover whether the binding (the invocable function) is still valid, given the current set of operands. Typically, if the “type” or “class” of the “receiver” isn’t the same as it was when the binding was computed, then the binding is no longer valid and must be recomputed.

3. If the DynamicMetaObject is able to provide a binding, then that binding will be used as the physical implementation of the abstract (logical) operation, for as long as the binding restriction predicate says that the binding is still valid.

4. However, if the DynamicMetaObject of the object nominally responsible for defining the semantics of an operation is not able to provide a binding (e.g., what’s the semantics of the DLR-standard “DeleteMember” abstract operation when the receiver is a Smalltalk object, or of the DLR-standard “Invoke” abstract operation when the receiver is null?), then the responsibility for providing a binding is redirected to the dynamic binding subsystem of the language that compiled the code that’s undergoing dynamic binding. This “host language” (to use the term the DLR uses) may be able to provide a binding, or it too may fail.

5. If the host language can provide a reasonable binding based on its own rules and semantics, it will do so. Otherwise, as a last resort, the binding that will be used would typically report or raise an error (although the DLR imposes no such requirement.) Note that different host languages may bind the exact same abstract operation on the exact same object differently: For example, one language might bind any operation applied to null by providing an invocable function that raises the NullReferenceException, whereas another language might instead provide a method it finds in the method dictionary of its UndefinedObject class, and yet another language might provide a function that’s just a no-op.

So that’s the standard binding protocol, and that’s what IronSmalltalk, IronPython and IronRuby do. And it’s what most languages do when they use the DLR’s DynamicMetaObject Protocol.

Of course, the point of that canonical (“officially recommended”) protocol is to ensure that objects have the right of first refusal in determining the concrete semantics of the abstract operations applied to them. And that’s a commendable and worthy goal of any programming language and/or execution environment.

But it’s not what Essence# does.

The Essence# Dynamic Binding Protocol

1. The first step of the Essence# dynamic binding protocol is identical to that of the canonical protocol. It pretty much has to be: The code that implements it is part of the DLR itself. It would be possible to circumvent it while still using the DLR’s DynamicMetaObject Protocol for dynamic binding, but doing so would be a lot more work, and would be much more likely to break if the DLR’s API is ever changed.

2. If the receiver of the message is a native Essence# object, then the object’s class will be asked for the method whose selector matches the message that was sent. If the class has such a method (either in its own local method dictionary, in the method dictionary of the trait or composite trait it uses, or provided by one of its superclasses,) then that method will be used as the binding’s invocable function, and the receiver’s CLR type and Essence# class will be used as the binding restriction (actually, the version ID of the class will be used, because the method might be removed from the class at some point in the future.) If the receiver’s class cannot provide a matching method, then the binding subsystem will ask the class for its #doesNotUnderstand: method; if the class can provide one, that will be used. Otherwise, the binding subsystem will provide a function for the binding that directly raises the MessageNotUnderstood exception, so defining the #doesNotUnderstand: message is entirely optional (see the sketch that follows this list.)

3. If the receiver of the message is not a native Essence# object (i.e., it’s an instance of some non-Essence# CLR type,) then the Essence# Object Space for that execution context will be asked to find or dynamically construct the Essence# class that represents instances of that CLR type. That operation cannot fail–a new Essence# class will always be defined, if it does not already exist. And there can be only one such class for each CLR type in any particular Essence# Object Space. The object’s class will then be asked for a method that matches the message selector, which happens in the same way as it would for a native Essence# object. The only difference is what happens if the class cannot provide a matching method:

4. In cases where the Essence# class for a foreign object does not have a method that matches the message selector, the DynamicMetaObject of the receiver will then be asked to provide a binding.  If it is able to do so, then that binding will be used.

5. Otherwise, if the receiver’s DynamicMetaObject fails to provide a binding, the Essence# dynamic binding subsystem takes over the responsibility for doing so. And that’s where much of the magic happens that enables Essence# to interoperate so smoothly and effectively with languages that aren’t primarily based on the DLR (e.g., C#, F#, Visual Basic, etc.) That’s also where you’ll find the bulk of the code that implements the Essence# run time system.
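
Here is the sketch promised above: a minimal, purely illustrative example of the step-2 fallback. It assumes the standard Smalltalk exception-handling protocol (#on:do:), a hypothetical selector #fooBarBaz that nothing implements, and a receiver (an instance of Object) whose class defines no #doesNotUnderstand: method:

[Object new fooBarBaz]
        on: MessageNotUnderstood
        do: [:ex | CLR.System.Console writeLine: 'fooBarBaz was not understood']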

So that’s what Essence# does to implement dynamic binding. The remaining question is why it does it that way, even though it’s not what the authors of the DLR strongly recommend.

The reason is both quite simple and quite profound: programming languages differ in their standard naming conventions generally, and the naming conventions of Smalltalk-like languages differ starkly from those of every other programming language in existence.

You see, the DLR’s protocol is based on the fallacy that the names of operations are the same in different languages, and on the false assumption that what one language does by using a syntactical construct, other languages also do by using some syntactical construct. Neither of those assumptions holds universally, even between languages that use the same operation-naming syntax, such as identifier + [“(” + {argument} + “)”] + “;”. But of course, Smalltalk-based languages don’t even share that meta-syntax with other languages, let alone the names themselves. And Smalltalk-based languages do almost everything by sending messages, and almost nothing by using special syntax, such as appending “()” to indicate “invoke a function” or prepending “(typeName)” to convert a value from one type to another.

So that’s why the first step Essence# takes in finding a binding when a Smalltalk message is sent to a foreign object is to ask the object’s Essence# class to interpret the message: It’s an Essence# (Smalltalk) message, not a message whose name or syntactical form is highly likely to mean anything to the objects of any other language.

What could or should the objects of any other language do when asked to find a binding for messages such as #displayOn:at:, #~~, or even just #value? Who could reasonably expect a CLR delegate with no arguments to know that it should respond to that message by invoking itself?

Worse, message (method) names such as #~~ and #displayOn:at: aren’t even legal in other languages. Even if they were, they aren’t likely to have the same semantics, because different languages usually have different standard libraries which use different names for the same thing.

Yes, Microsoft’s standard languages for .Net all use the same standard library, but that’s a special case. The DLR was intended to be used by dynamic languages with pre-existing standard libraries that are quite different from .Net’s BCL; it was not intended to be used by languages designed and implemented by Microsoft as native residents of the .Net framework. It therefore was a mistake to design the DLR’s dynamic binding protocol based on that false assumption. And that’s true without even considering the difference between Smalltalk’s keyword message names and the naming conventions used by all other languages.

The bottom line is that the Essence# dynamic binding protocol handles both the difference between Smalltalk’s message-name syntax and that of other languages, and the difference between the traditional, canonical semantics of Smalltalk message names and the semantics of the .Net BCL. And it does so in a way that minimizes the risk that a message name will be misinterpreted just because it happens to match the name of a foreign function.

Essence# also provides a solution to the inverse problem of Essence# (Smalltalk) objects going on excursions to the homelands of foreign programming languages, and having foreign operations applied to them. And I admittedly haven’t documented that yet. Nevertheless, this post is already too long, so I’ll leave that topic to another time.

The Big Event: New Release Of Essence# Adds Full Integration With .Net Events

The new release, dubbed ‘Trumpet-2’, completes the work started by the previous release (Trumpet) by adding full integration with .Net Events. And ‘full’ means just what it says:

You can write Essence# code that adds and removes event handlers to any .Net event.

You can write Essence# code that raises any .Net event.

Also, blocks have been extended so that they can act just like .Net multicast delegates, and they can also be added to the invocation lists of any .Net delegate. That means that you can even publish blocks to the .Net world either as delegates or as events, because .Net code can add “handlers” to an Essence# block as though it were a .Net event (although .Net 4.0’s “dynamic” type must be used.)

These new features/capabilities remove the last obstacle to building a GUI in Essence#.

The Trumpet-2 release can be downloaded from the Essence# site on CodePlex (which gets you the installer.) Alternatively, the code for the run time system (written in C#) can be obtained from the Essence# CodePlex site using Git, and the code for the Essence# Standard Library can be obtained from GitHub. The example scripts can also be downloaded from the Scripts repository on GitHub.

Essence# Blocks As .Net Multicast Delegates

Essence# blocks have always been implemented using .Net delegates, although a block instance is not itself a .Net delegate. Instead, an Essence# block encapsulates a .Net delegate. Briefly, a .Net delegate is an object-oriented function pointer, in the same sense that a .Net (or Essence#) object is an object-oriented data pointer: You can’t access or mutate the value of the pointer as an address value (e.g., a number), but that’s what it is “under the hood.”

What’s different about .Net delegates with respect to their analogs on other platforms is that a delegate can point to more than one function at a time–provided they all have the same parameter signature. When a delegate is invoked, all the functions in its invocation list will be executed. Whether that’s a good feature to be built into a language (or in this case, into a run time system) is an interesting topic, but that’s water under the bridge: It’s the reality of the .Net environment, and full interoperability with .Net requires that languages implemented on the .Net CLR support it.

Although the current documentation does not discuss this much (or at all,) it is usually possible to use a “raw” .Net delegate in Essence# code as though it were an Essence# block–and vice versa. Until now, one major exception to that has been adding functions to, and removing them from, the invocation list of a .Net delegate.

As of this release, you can get or set the delegate encapsulated by an Essence# block by sending it the message #function (to get the delegate) or #function: aDelegate (to set it.) You can also now send the message #+= aBlockOrDelegate to add the function (or functions) of aBlockOrDelegate to the invocation list of the delegate encapsulated by the receiving block, and the message #-= aBlockOrDelegate to remove the function (or functions) of aBlockOrDelegate from that invocation list. Note, however, that in either case the receiver must be an Essence# block.

Additionally, the message #+ aBlockOrDelegate can be sent to a delegate: It answers a new delegate that combines the invocation lists of the two operands (the receiver and the message argument.) And the message #- aBlockOrDelegate can also be sent to a delegate: It answers a delegate whose invocation list is that of the receiver with the argument’s invocation list removed.
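
To make that concrete, here is a minimal sketch using two illustrative zero-argument blocks; the selectors are the ones just described, and the commented behavior follows from the description of multicast delegates above:

| handler extra rawDelegate |
handler := [CLR.System.Console writeLine: 'first handler'].
extra := [CLR.System.Console writeLine: 'second handler'].
rawDelegate := handler function.  "the CLR delegate the block encapsulates"
handler += extra.                 "adds extra's function(s) to handler's invocation list"
handler value.                    "should invoke both functions, in order"
handler -= extra.                 "removes extra's function(s) again"
handler value                     "invokes only the first handler"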

Using .Net Events

To subscribe to a .Net event, send the message #onEvent: eventName do: aBlockOrDelegate to the object that defines the event (it must be a native CLR object, not an Essence# object.)

To unsubscribe from a .Net event, send the message #onEvent: eventName doNotDo: aBlockOrDelegate to the object that defines the event (it must be a native CLR object, not an Essence# object.)

To raise a .Net event, it is first necessary to add a user primitive to the Essence# class that represents the CLR type of the object that defines the event, as in the following example:

protocol: #events method:
[## invokeTickEvent: eventSource args: eventArgs

        <invokeEvent: #tick>        
]

Note that the number and order of the arguments of the Essence# method selector must match those of the .Net event-handler type that the declaring .Net type defines for that event. The symbol (or String, or identifier–any will work) that follows the “invokeEvent:” keyword must match the name of an event defined on the .Net type whose instances will receive the message defined by the “invokeEvent:” primitive.

Once there is such a primitive, the event with which the primitive is associated can be raised by sending the message defined by the primitive (which in the case of the above example, would be #invokeTickEvent: theEventSource args: anEventArgs) to an object whose type defines the event (and whose Essence# class defines that primitive.) Note that one would usually send such a message to the same object that is passed in as the first message argument (“theEventSource” in the example.) It would usually be a good idea to define a “helper” method in the same class that simplifies the API for raising the event, such as in the following example:

protocol: #events method:
[## announceTick 
	
        self invokeTickEvent: self args: self newTickEventArgs
]

With such a helper method defined, the event can be raised simply by sending the message #announceTick to the event source. Obviously, there would also need to be an implementation of the #newTickEventArgs method, written either in Essence# or in whatever language implements the .Net class or struct that defines the event.
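
For completeness, here is a purely hypothetical sketch of such an implementation, assuming the tick event’s handler type accepts a plain System.EventArgs (if the event declares a more specific EventArgs subclass, that type would have to be instantiated instead):

protocol: #events method:
[## newTickEventArgs

        ^CLR.System.EventArgs new
]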

Integration of .Net Events With Essence# Announcements

In order to provide a bridge between .Net events and Essence# Announcements, the following methods have been added to CLR.System.Object:

protocol: #'events-subscribing' method:
[## when: anAnnouncementClass do: anAction
	
	^self onEvent: anAnnouncementClass eventName do: anAction
];
	
protocol: #'events-subscribing' method:
[## when: anAnnouncementClass doNotDo: anAction
	
	^self onEvent: anAnnouncementClass eventName doNotDo: anAction
]

By default, the #eventName message answers the name of the Announcement class that receives it (or the name of the class of the receiver, if the receiver is an instance of an Announcement class.) However, subclasses of Announcement may override that implementation as needed.
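
Here is a usage sketch. It is purely hypothetical: it assumes an Announcement subclass named TickAnnouncement whose #eventName has been overridden to answer #tick, and it borrows the Metronome example class used in the conclusion below:

| metronome onTick |
metronome := Examples.Metronome new.
onTick := [:eventSource :eventArgs | CLR.System.Console writeLine: 'Tick!'].
metronome when: TickAnnouncement do: onTick.      "equivalent to: metronome onEvent: #tick do: onTick"
metronome announceTick.
metronome when: TickAnnouncement doNotDo: onTick  "unsubscribes the handler"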

Conclusion

Here’s some example code that you can actually run (using the ES command-line tool) to see .Net events being raised and handled by Essence# code:

| metronome handler |
metronome := Examples.Metronome new.
handler := [:eventSource :eventArgs | CLR.System.Console writeLine: 'Tick!'].
metronome onEvent: #tick do: handler.
metronome announceTick.
metronome onEvent: #tick doNotDo: handler.
metronome announceTick

The C# class EssenceSharp.Examples.Metronome has been made a permanent part of the Essence# distribution in order to be used as an example and in unit tests. The Essence# class that represents it can be found at Standard.lib\Examples\Metronome. And a new script named Events.es that contains the above code has been added to the list of example scripts.

The next release will introduce “Traits” into Essence#

Seminal paper on Traits: Traits: Composable Units of Behaviour.

Traits are even more useful, and more needed, in Essence# than they would be in a typical programming language because of Essence#’s primary mission: interoperability with other languages, and especially with the .Net Base Class Library.  That’s because the .Net BCL was designed and implemented for statically-typed languages, where it is often either impossible or greatly inconvenient to use a common inheritance hierarchy for conceptually similar constructs. Traits provide the right mechanism to rectify that flaw. They also make it easier to provide an identical API for both native Essence# objects and for those provided by the .Net BCL.