Writing PowerShell Core AWS Lambda Functions – Part I

Overview

AWS Lambda support for PowerShell Core is here! In this series of blogs, we're going to take a dive into writing one of these Lambda functions in PowerShell Core. To make it a bit more fun, we'll be connecting this function with Lex, which will in turn be connected to a custom Facebook page to drive an interactive AWS PowerShell help facility.

Our Goal

By the end of this series of blogs, we'll have in place a Facebook page which, when sent a message containing an AWS PowerShell command, will provide an overview of what the command does. It will do this by forwarding the request to a Lex 'bot, which will parse it and then provide the input criteria to our PowerShell Lambda function. Our function will then look up the documentation page for the command, extract a summary, and provide a JSON-formatted response back to Lex, which in turn will feed the result back to the Facebook chat channel. Phew!

How PowerShell Processes Lambda Input & Output

First though, let's take a look at how a Lambda function written in PowerShell works.

As covered in a previous blog, when a Lambda function is invoked, up to two parameters are passed: a context object and an input object. The context object simply contains details of the Lambda environment, whilst the input object contains event details, such as the S3 information accompanying an ObjectCreated event. For languages such as Go, you need to write an event handler to process this information; there's an example in the previous blog in this series.

This month's announcement from AWS of support for PowerShell Core 6.0 with Lambda, along with new tooling for it, was quite a step in the evolution of PowerShell, and introduces another way of processing events. In a packaged PowerShell script for Lambda, the values of the parameters passed into the function are made available via the predefined variables $LambdaInput and $LambdaContext; there is no need to write method or function handlers. Of particular note is that $LambdaInput is automatically cast to a PSObject, making parsing and processing of the information a lot simpler. The handler does not need to know the JSON schema in advance, and neither do you need to go through reflection hell and mappings to identify it. PowerShell does it all for you; there is no need to even use the ConvertFrom-Json cmdlet.
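
As a minimal sketch (the property path assumes a standard S3 event payload; adjust for your own event source), reading the input inside a packaged PowerShell Lambda script looks like this:

    # $LambdaInput is already a PSObject, so event fields can be read
    # with plain dot notation; no ConvertFrom-Json required
    $bucket = $LambdaInput.Records[0].s3.bucket.name
    $key    = $LambdaInput.Records[0].s3.object.key
    Write-Host "Received object $key in bucket $bucket"
    Write-Host "Running in function $($LambdaContext.FunctionName)"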

For returning output (if so desired), strings are passed back as-is, while non-string return types are automatically serialized to JSON before being handed to the recipient. The choice is yours.
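
For example, either of these could be the final statement of a PowerShell Lambda script (a sketch, not specific to our project):

    # a plain string is returned verbatim
    'All done'

    # any other object is automatically serialized to JSON for the caller
    [PSCustomObject]@{ statusCode = 200; body = 'All done' }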

Prerequisites

In order to develop this solution, we need to have the following already in place:

  • AWS account
  • Facebook developer account
  • Facebook app
  • .NET Core 2.1 SDK
  • PowerShell Core 6.0
  • AWSLambdaPSCore Module
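
The AWSLambdaPSCore module is published to the PowerShell Gallery, so (as a sketch, run from a PowerShell Core 6.0 session) the last item can be installed with:

    # install the Lambda tooling module for the current user
    Install-Module AWSLambdaPSCore -Scope CurrentUser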

AWS Account

I'm going to assume that you already have the first item in the list. If for whatever reason that's not the case, you can sign up for a Free Tier account at https://aws.amazon.com/free/

Creating a Facebook Developer Account

You have the option of either converting an existing Facebook account to a developer one, or alternatively creating a new, developer-specific, Facebook account. This is necessary in order to allow our application to access both Lex and the internal Facebook APIs.

Full instructions for doing this are at: https://developers.facebook.com/docs/apps/

Creating Our Facebook App & Page

Once you have carried out the previous step, we need to register an application and page which will connect to our forthcoming Lex 'bot. You can find full instructions at https://developers.facebook.com/docs/messenger-platform/getting-started/quick-start

In your browser, go to https://developers.facebook.com/

  • Click Log In
  • Enter the Facebook credentials that are associated with your development account
  • Click Log In

  • Click My Apps
  • Click Add New App

  • On the Create a New App ID dialog, enter AWS PowerShell Help for Display Name, and your own email address for Contact Email
  • Click Create App ID

  • Follow any instructions if prompted for a Security Check
  • You’ll be taken to an Add a Product screen. Locate Messenger, and click Set Up

  • Scroll down to Token Generation
  • Click Create a new page
  • On the Community or Public Figure page, select Get Started

  • Page Name : AWS PowerShell Help
  • Category: Computers & Internet Website
  • Click Continue

  • On Add a Profile Picture, select Skip

  • On Add a Cover Photo, select Skip

  • After a couple of seconds, our AWS PowerShell Help page will be created.
    Go back to Token Generation in the Messenger settings
  • Select our AWS PowerShell Help page
  • Select Continue as … when prompted to allow the application to receive your name and profile picture

  • The next dialog is to do with allowing your application to act on your behalf. Click OK to authorize.

  • When the screen returns to Token Generation, it will now have a Page Access Token. Record this information for later use

  • Now go to Settings, Basic
  • Click Show in the App Secret dialog, and enter your password when prompted
  • The App Secret will be displayed. Also record this information for later use

Conclusion

At this point, with the exception of one setting which needs to be configured after we have created our Lex 'bot, the prerequisites for our Facebook app are in place. In the next blog, we'll get our development environment set up for generating our PowerShell function.

Thanks for reading! Feedback welcome!

Beginnings in Golang and AWS – Part VII – Events, Lambda and Transcribe (cont’d)

Introduction

In today's post, we're going to be doing the fun part of putting everything together to get our project into action. We'll be uploading our code to S3, creating our Lambda function, and then creating an event subscription that will trigger the function when we upload an mp4 file to the bucket being used. As we're already aware, this should then kick off processing of the file and create a transcription using Transcribe.

Uploading our Code to S3

When creating a Lambda function that runs Go code, we need to provide a zip file containing the compiled binary. This can either be uploaded directly from your development system, or alternatively placed in S3, with the location information provided to Lambda.

Of the two options, the latter is the more flexible for us, since we can update our code and upload a new zip file to the same location without needing to change any configuration.

Note that I'm doing these steps on OSX and using Bash. On Windows systems you may need to use slightly different commands.

Ensure you're using the current GitHub release

Change the current directory to ./src/transcribe within the project folder.

The Lambda code runs on Linux, so when compiling we need to make sure the compiler knows this (via GOOS=linux). We also specify the output file name, main.
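
From Bash, a build along these lines does the job (run from the ./src/transcribe directory):

    # cross-compile for Linux, naming the output binary 'main'
    GOOS=linux go build -o main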


Now, we can create the zip file
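
Something like the following works:

    # package the binary for upload to Lambda
    zip main.zip main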

An inspection of the contents of the directory should show two new files: the main binary, and the zip file containing it, main.zip.

Uploading the File

Now we want to upload the file. You can either do this manually from the AWS console, or use the utility we created earlier in the series.
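
Alternatively, if you have the AWS CLI installed, an upload along these lines works (the bucket name is a placeholder; substitute the bucket you're using):

    aws s3 cp main.zip s3://your-bucket-name/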

Now, if you log in to the AWS console and take a look at the S3 bucket, main.zip will be there. Click on the file to bring up its properties, and copy the URL shown under Link to the clipboard. We'll be using this in the next step.

Creating the Lambda Function

From the AWS console:

Click Services on the black bar, and then Lambda

Click Create Function

Now, in the Author from scratch section, enter the following values.

  • Name : transcribe
  • Runtime: Go 1.x
  • Role: Create new role from template(s)
  • Role name: transcribe_role

Click Create function

The Designer window appears, featuring the transcribe function, with a role already defined to allow access to CloudWatch.

Next, we tell Lambda where to get the code.

In the Function code section, select:

  • Code entry type : Upload a file from Amazon S3
  • Runtime: Go 1.x
  • S3 link URL : <paste the link that you copied to the clipboard in the previous step>
  • Handler : main
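
For reference, the same function could be created in a single step with the AWS CLI (a sketch; the role ARN and bucket name are placeholders, and it assumes the role already exists):

    aws lambda create-function --function-name transcribe --runtime go1.x \
        --handler main --role arn:aws:iam::123456789012:role/transcribe_role \
        --code S3Bucket=your-bucket-name,S3Key=main.zip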

Create an S3 Trigger

Now that the basic function is in place, we need to configure it for our specific needs. As previously mentioned, we want it to run when a new .mp4 file is created in our S3 bucket.

On the left hand side, click S3

This will add it to the designer, and a Configure triggers dialog will appear.

Change Suffix to .mp4 and select Add

Click Save

Give the Lambda function access to Transcribe

We need to give transcribe_role permission to access Amazon Transcribe, in addition to S3 and CloudWatch.

  • Click Services, IAM
  • Click Roles
  • Click transcribe_role

  • Click Attach policies
  • Find and put a check next to AmazonTranscribeFullAccess
  • Click Attach policy
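
The same attachment can be done from the AWS CLI if you prefer (AmazonTranscribeFullAccess is an AWS managed policy):

    aws iam attach-role-policy --role-name transcribe_role \
        --policy-arn arn:aws:iam::aws:policy/AmazonTranscribeFullAccess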

With this complete, the policy is now attached to the role.

A return to the Lambda function will now also show Amazon Transcribe on the right-hand side, indicating that the function has permission to access this service.

Upload our movie

We're ready to test the functionality out! Let's find an mp4 video file and upload it. In my case, I'll be using a file named movie.mp4.

As before, you can upload the file manually via the AWS console, via the AWS CLI, or using the Upload program we created earlier.

The Results

The function should kick in pretty much as soon as the file has finished copying to S3. Let’s have a look from the console by going to Machine Learning, Amazon Transcribe.

And it’s there. We’ve now got an end-to-end mp4 to transcript file.

Conclusion

In this post, we've created our Go package, uploaded it to S3, set up the Lambda function, created an event subscription for when an MP4 file is uploaded to our bucket, configured the role associated with the function to allow it to use Transcribe, and verified its operation.

At this point, there are further steps we could consider:

  • During the time of putting this series of blogs together, AWS added CloudWatch events for Transcribe. We could write another Lambda function which runs when a Transcribe job either completes or fails. Using this, we could notify ourselves when a job has finished, or even do something like converting the output to .srt format.
  • Add an endpoint and some code to allow us to query the status of one of the jobs.
  • We could even look into a completely different way of getting a file into S3, such as passing a link to an MP4 file on a website and getting the Lambda function to download the file and store it directly in S3 prior to creating the job.
  • Via the above, we could also look at adding in additional event sources, such as via API Gateway.

There are lots of possibilities, and a forthcoming blog series will cover one or more of them.

Thanks for reading! Feedback always welcome. 🙂

cheersy,

Tim
