New Feature
The latest version of Go (1.8) has a feature I have been waiting for for a really long time: plugins. Now, this feature currently has some limitations, listed in the docs:
- It only works on Linux (which is fine by me, given what I want to do)
- A plugin is only initialized once, and cannot be closed.
- It has to be a main package.
And I also discovered a couple of papercuts of my own:
- It does not work with weird folder names: I had a plugin in /plugins/v0.1/plugin.go and when opening it there was a Symbol not found error.
- It does not like interfaces as exported variables: I had to resort to exporting my desired code as a function instead of a variable, because it was not working with an interface. The symbol was being exported, but I was not able to typecast it back to the interface.
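For reference, the plugin API itself is tiny: plugin.Open loads a shared object built with go build -buildmode=plugin, and Lookup fetches exported symbols (an exported variable comes back as a pointer to it). Here is a minimal sketch; the plugins/example.so path and the Version variable are assumptions for illustration, not part of the real repo:

```go
package main

import (
	"fmt"
	"plugin"
)

// loadVersion opens the shared object at path and reads its exported
// Version variable. The symbol name is an assumption for this sketch.
func loadVersion(path string) (string, error) {
	p, err := plugin.Open(path)
	if err != nil {
		return "", fmt.Errorf("failed to open plugin: %v", err)
	}
	// Lookup on an exported variable returns a pointer to it.
	sym, err := p.Lookup("Version")
	if err != nil {
		return "", fmt.Errorf("symbol not found: %v", err)
	}
	v, ok := sym.(*string)
	if !ok {
		return "", fmt.Errorf("Version has an unexpected type")
	}
	return *v, nil
}

func main() {
	version, err := loadVersion("plugins/example.so")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("plugin version:", version)
}
```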
Now, why do I like plugins so much? They help with content delivery and division of tasks. How so? Like this:
Suppose you have a large enough set of sufficiently independent features, each feature managed by a different team, with different patch version cycles.
The current trend is to manage this scenario with microservices, but this is not always possible, which leads us to the proposed scenario.
The typical approach is to have each team work on features, perhaps in feature branches, in the same repo, and release a minor or patch version with each fix/change.
Plugins allow us to take this a step further:
Before we begin
All the code for these samples can be found in a functional example here.
Define your types as a contract:
The one thing that needs to be set in stone between patch versions is the types; changing them can affect things like persistence or communication formats.
In the example repo I did not choose a very good name for the types package: I used contract. In it you can see a very basic interface for my basic Plugin:
// Plugin represents a valid plugin.
type Plugin interface {
	IsAcceptable() bool
	Version() string
}
Implement the type in a separate repo
In this case I used the same repo. I implemented a couple of plugins: one that I will deem valid and the other invalid.
package main

import "github.com/perrito666/blogpost_goplogins/contract"

// ShowcaseElement is the variable used as the plugin entry
// point.
var ShowcaseElement contract.Plugin = &plugin{"0.1"}

// Showcase returns the current ShowcaseElement.
func Showcase() contract.Plugin { return ShowcaseElement }

type plugin struct {
	version string
}

// Version returns this plugin instance's version.
func (p *plugin) Version() string {
	return p.version
}

// IsAcceptable returns true if this plugin is acceptable for our very
// demanding criteria.
func (p *plugin) IsAcceptable() bool {
	return true
}

// This is just because this has to be a main package.
func main() {}
Now, this code is clearly useless on its own; it is designed just to showcase the idea behind plugins. You can see that Showcase is there only to export ShowcaseElement properly, since I wanted the plugin type to be private and to export only what is defined by contract.Plugin. Accessing ShowcaseElement directly was not possible, as it could not be properly typecast back, so I created an accessor function; the end result suits me well.
In the repo you can find an invalid version that has only one very small and dumb variant: the IsAcceptable method returns false. The idea behind this is that you should write code that can diagnose whether a new release is suitable for the calling code. This allows for more combinations of main calling code and plugins, but it also increases the possibility of untested combinations running in production, which is not good at all.
One very nice use case here is unattended distributed systems, where factors like different network topologies or underlying hardware can be used as the validation criteria to pick a plugin.
The final part of this puzzle is the loader. You can have a defined path for plugins and watch it for changes, prompting a restart of the running process so it can re-load the plugins. You can easily control the caller (such as upstart or systemd) by exiting with a status that signals the daemonizing service that this exit was caused by new plugins. Having the exit controlled by your own process and not by external scripts allows for cleaner shutdowns.
Now, to complete the example, here is a very simple and plain main that will load the latest suitable plugin version.
For this example, the plugin code is also present in the same repo under plugin_sources and can be built by calling build_plugins.sh inside the plugins directory (it essentially runs go build -buildmode=plugin for each plugin).
package main

import (
	"fmt"
	"path"
	"plugin"

	"github.com/perrito666/blogpost_goplogins/contract"
)

// pluginFolder is where the built .so files live.
const pluginFolder = "plugins"

func main() {
	// findAvailableVersions returns a list of the .so files in decreasing
	// version order; this assumes that you named the plugins with the
	// versions, of course. The plugin code is quite brittle and based on
	// naming.
	pluginVersions, err := findAvailableVersions(pluginFolder)
	if err != nil {
		fmt.Printf("failed to list plugins: %v\n", err)
		return
	}
	var acceptedPlugin contract.Plugin
	// Iterate versions from newer to older.
	for _, version := range pluginVersions {
		// Try to open each one, but don't stop on failure since an older
		// one might work.
		p, err := plugin.Open(path.Join(pluginFolder, version))
		if err != nil {
			fmt.Printf("showcase plugin is not available: %v\n", err)
			continue
		}
		// Look up the actual accessor.
		e, err := p.Lookup("Showcase")
		if err != nil {
			fmt.Printf("showcase element is not present: %v\n", err)
			continue
		}
		// Basic sanity check that the plugin has the right types (too basic
		// though, the types could still be wrong).
		pluginAccessor, ok := e.(func() contract.Plugin)
		if !ok {
			continue
		}
		candidate := pluginAccessor()
		// Run our validation code.
		if candidate.IsAcceptable() {
			acceptedPlugin = candidate
			break
		}
	}
	// No suitable version found; this is a good moment to actually panic.
	if acceptedPlugin == nil {
		fmt.Println("no suitable plugin version found")
		return
	}
	// If this was an actual daemon this would be a loop of sorts.
	fmt.Printf("Found newest valid plugin version: %q\n", acceptedPlugin.Version())
}
The above makes a nice showcase of what we can do: we can now have a main runner and different teams shipping plugins to be deployed. The size of the binaries distributed will be smaller, assuming you have proper segmentation and isolation in your codebase. The possibility to roll back to older versions is also a big win in this case; with the right resiliency code in place you can have a very robust service that does not fall over on failed micro upgrades.
Again, take this with a grain of salt: this is a proof of concept of a very new feature put together as a simple showcase. I have not looked into the underlying implementation, the tradeoffs in terms of performance or memory, or any other in-depth analysis.