Making a Telegram Bot with GoSparta



When I started this, my main goal was to learn about AWS Lambda. My research suggested that the best approach, if you wanted to use Go, was to use a framework, which gave me the extra chance to try mweagle's go-sparta, something I have been wanting to do for quite some time (and not only because he works with me).

The choice of a subject for this short exercise was random: I considered the webhook API for Telegram a good fit for a lambda-function-driven example, and I wanted to make a silly bot to see how hard it would be, so it was a good chance to try both.


This is basic: it creates a bot that replies in a very simple way to simple inquiries. It is not in the scope of the exercise to make the bot do all the functionality I originally had planned; further posts will deal with more complete functionality. Completely outside of scope is a bot that can send messages without being triggered by a request — this set of examples will only contain a reactive bot.

Some knowledge about various AWS services and their configuration is assumed. This could be done without it, but the parts not explained could prove a small headache.

Building the bot

Setting up AWS

We can start with the AWS setup, since the Telegram parts are not required until testing actually happens.

For this you will need an S3 bucket; it can be private.

We will also need to create an AWS role with certain permissions for this exercise; based on the go-sparta FAQ we can determine which permissions are required. This is a working policy — you could most likely tailor it a bit more to only work on certain objects, but for me it was enough (replace YOUR_BUCKET_HERE with your bucket name):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                ],
                "Resource": "*"
            },
            {
                "Sid": "VisualEditor1",
                "Effect": "Allow",
                "Action": [
                ],
                "Resource": [
                ]
            }
        ]
    }

This should provide all the settings we need to successfully run the exercise.

Environment variables

To ease development and keep secrets secret, we will use some environment variables to store the data for AWS login and other artifacts that our lambda function will need at compile time but that do not need to be stored in the code.

I have a small bash file that I source before running go-sparta, and it sets all I need; the AWS variables will be picked up by Sparta and the rest by our code once we write it.

export AWS_ACCESS_KEY_ID=<your role access key>
export AWS_SECRET_ACCESS_KEY=<your role secret access key>
export AWS_REGION=<your aws region>
export NIANCULBOTAPI=<your telegram api key once you get one>
export S3BUCKET=<the bucket you just created>

Creating the Bot Code

Before starting, here is the sample code used in this post, with some extra goodies.

The message

The first thing to take into account is the kind of message we will obtain from Telegram and how that is going to be wrapped.

A lambda function is basically a piece of code that runs in someone else's infrastructure and context. In order to make our lambda function behave like an HTTP endpoint, we will need to use an API Gateway, basically a gateway between the edge of the infrastructure and your functions (and other objects). More detail on how to set this up will follow, but for now, let's just keep that idea in the back of our head.

The Gateway will wrap the message obtained from the HTTP request (Telegram sends a POST) and give it to us de-serialized inside an object, assuming our lambda function has the right recipient.

To receive the message we will craft a wrapper around a tgbotapi.Update like the following example:

type TelegramRequest struct {
	Body tgbotapi.Update `json:"body"`
}

Notice that the tgbotapi.Update is assigned to the Body struct field and has a corresponding serialization tag. Ideally you would assign to Body any de-serializable type that can hold whatever is being sent in the body; in this case we are lucky, since Telegram always sends tgbotapi.Update objects and those are very well tagged in the library.

The lambda function itself

Our lambda function will be a regular Go function that takes a context.Context and a TelegramRequest as parameters and returns a string and an error. The string will be ignored by Telegram but is useful when testing with curl, and the error is useful because Telegram will use a failure as a retry indicator.

func chatty(ctx context.Context, gatewayEvent *TelegramRequest) (string, error) {

The first thing to do is to try to obtain a logger. Logs will be output to CloudWatch and, for our didactic case, they are also very useful to peek at what is being sent.

	logger, loggerOk := ctx.Value(sparta.ContextKeyLogger).(*logrus.Logger)
	if !loggerOk {
		return "cannot get a logger", nil
	}

Next we need a Telegram bot API client to be able to reply. Contrary to what one would think, the response to Telegram is completely ignored by it, so if you want to reply to the user you will need to instantiate a new client and send a message.

	bot, err := tgbotapi.NewBotAPI(TelegramBotAPI)
	if err != nil {
		logger.WithFields(logrus.Fields{
			"Event": gatewayEvent,
		}).Error("cannot create new bot")
		return "cannot create new bot", err
	}

And now the core of the functionality: we extract the Update from the message Body and try to determine what it is (commands and messages seem to be a convention in the Telegram library).

You can explore what is being done here in more detail in the repo, but basically we try to parse the message, act accordingly if it is a command, and give a quick example of the difference between a plain message and a Reply. There is more that could be done, like identifying whether the channel is a group chat or a personal one and which, along with a few other things.

	u := gatewayEvent.Body
	message := fmt.Sprintf("I don't know what to make of: %q 🤷‍♀️", u.Message.Text)
	isReply := true
	if ok, command, args := isCommand(u.Message.Text); ok {
		message, isReply = handle(command, u.Message.Chat.UserName, args)
	}

	msg := tgbotapi.NewMessage(u.Message.Chat.ID, message)

	if isReply {
		msg.ReplyToMessageID = u.Message.MessageID
	}

	if _, err := bot.Send(msg); err != nil {
		return "cannot send message", err
	}

	return "", nil
}

Uploading the bot to AWS

Now the fun part. The best of it is that, thanks to go-sparta, we declare the whole deployment using Go.

The comments in this section are all by @mweagle; you can find the whole file here.

func main() {

Create a new API Gateway stage that’s eligible for a deployment. A stage is a snapshot of the public routes available for an API-G deployment

	apiStage := sparta.NewStage("v1")

Create an API Gateway RestAPI resource and associate it with the deployable stage


	apiGateway := sparta.NewAPIGateway("NianculBot", apiStage)

This allows the URLs to be accessed via AJAX Requests

	apiGateway.CORSOptions = &sparta.CORSOptions{
		Headers: map[string]interface{}{
			"Access-Control-Allow-Headers": "Content-Type,X-Amz-Date,Authorization,X-Api-Key",
			"Access-Control-Allow-Methods": "*",
		},
	}

Transform an AWS go-compliant lambda signature into a deployable Sparta struct. This struct allows us to associate the lambda function with the API Gateway URL resource

	lambdaFn := sparta.HandleAWSLambda("telegram",
		chatty,
		sparta.IAMRoleDefinition{})

Create an API Gateway resource that routes /v1/chat to our lambda function. This associates an API Gateway Integration request with the target lambda function.


	apiGatewayResource, _ := apiGateway.NewResource("/chat", lambdaFn)

Once the integration request is established, define the specific HTTP methods available on that request path. Our bot only responds to POST. It also only returns two different status codes (200, 500). Reducing the set of eligible HTTP status codes returned from the function call reduces the set of regular expressions applied to the response body. This improves performance, reduces the provision time, and minimizes the overall CloudFormation stack size.


	apiMethod, apiMethodErr := apiGatewayResource.NewMethod("POST",
		http.StatusOK,
		http.StatusOK,
		http.StatusInternalServerError)
	if nil != apiMethodErr {
		panic("Failed to create /chat resource: " + apiMethodErr.Error())
	}

To minimize the number of API Gateway Mapping templates and the overall size and time to provision of our stack, we’ll limit the API Gateway route to only accept application/json data provided over an HTTP POST


	apiMethod.SupportedRequestContentTypes = []string{"application/json"}

Create the slice of lambda functions that define this service

	lambdaFunctions := []*sparta.LambdaAWSInfo{lambdaFn}

Create a uniquely named CloudFormation stack for this service. This utility function allows multiple developers to provision the same service in a single AWS account

	stackName := spartaCF.UserScopedStackName("NianculBot")

Delegate to Sparta to handle cross compiling, packaging, and managing the service.

	sparta.Main(stackName,
		"Core of the Niancul Chat Bot for Catering Barbecues",
		lambdaFunctions,
		apiGateway,
		nil)
}

The telegram part

To create a bot, simply follow the instructions here and then put the API token in the shell variable mentioned before.

To make this variable reach our code without being committed with it, we will use ldflags, which are passed to go-sparta when running the provisioning step.

In our code we will simply declare a string variable var TelegramBotAPI = "" and the rest will be done in the invocation.

Putting all together

To make things easier we will put the invocation in a Makefile, but you could very well use a shell file or make the invocation from the shell yourself.

export S3BUCKET := $(S3BUCKET)

.PHONY: provision
provision:
	go run main.go provision --ldflags "-X main.TelegramBotAPI=$(NIANCULBOTAPI)" --s3Bucket $(S3BUCKET)

Basically we use go run with the main.go file (and others if involved) and then the parameters for go-sparta which, if all is correct, will upload the lambda function to your AWS account. Notice how --ldflags is passed -X to replace the variable we set up earlier with the contents of the shell one.

Letting Telegram know

If you were successful in the previous step, you should have seen, among other information, the following line:

APIGatewayURL Description="API Gateway URL" Value=""

Copy the Value from that line and use it in the following command:

curl --request POST --url https://api.telegram.org/bot${NIANCULBOTAPI}/setWebhook --header 'content-type: application/json' --data '{"url": "<APIGatewayURL>/chat"}'

Notice we added the /chat endpoint which is the resource we set for our lambda in the API Gateway.

Now we are ready to either talk directly to the bot or add it to a group, and to enjoy adding more commands and re-provisioning.
