# ECS Explorer: AWS containers without the stress

If you’ve ever used Amazon’s AWS console you’ll know that it could do with some improvement. It’s easy to get lost in the myriad menus and there are many annoying bugs. It’s also slowwwwww.

I’ve heard that this is no accident – Amazon wants the community to provide the tooling for them. This ties in well with the fact that the AWS API is actually pretty good.

I decided to save myself some time and build one of these community solutions. I focused on ECS (EC2 Container Service) as this is the service I interact with the most. ECS is AWS’s way of managing Docker containers running on EC2 instances. Some of the terminology can be a little confusing, but overall it’s a good product.

I built ecs_explorer to cover some of the annoying tasks I do in the console: looking for small pieces of information in a given container/service. For example, I might need the IP address of the container that’s running a certain task. In the console, this can take more than 20 clicks; in ecs_explorer I can find it in 10 seconds.

You can find ecs_explorer on GitHub. The main aim is to get you to the information you need as quickly as possible. It supports navigation of all ECS objects, viewing important details as well as viewing the full JSON if you need more information. One of the key bugbears with the web console, filtering resources, is much improved. Another advantage over the console is that there’s no need to log in.

Installation is easy. Once your credentials are available via environment variables or a dot file (see here for more information), it’s just a case of running pip install ecs_explorer. The CLI is then available to run from anywhere, with ecs_explorer -h showing the available options.

Give it a try and tell me what you think!

# What I install on a new computer

I recently got a new work computer (a 15″ MacBook Pro) and was given the choice of copying across the image from my old one or starting afresh. It’s always good to take stock of what software you are using so I chose the second option. Here’s what I installed:

## Homebrew

From my point of view, the most essential piece of software that you can install on macOS. It’s a package manager that installs command-line software as well as GUI applications. It gives you a way to install software from a reputable source, upgrade it when a new version comes out, and keep track of what you’ve already installed. Unless stated otherwise, all the following software is installed using it.

## iTerm2

The terminal client that comes with macOS is not very good, and it doesn’t need to be, because iTerm2 exists. It is easy to configure, contains a multitude of shortcuts and helpful features, and just gets out of your way.

## Fish

I spend a lot of time on the command line and I’m not a huge fan of Bash (the default shell on macOS). Fish feels intuitive and easy to learn. Functions are easy to write, autocompletion works better than you’d expect, and you can always switch back to Bash if you need to run a specific Bash script.

## Oh-my-fish

Configuring Fish manually is an intimidating task. Oh-my-fish (named after the Zsh equivalent, Oh-my-zsh) gives you a set of packages and themes to install. Themes are the most interesting feature for me: I use the bobthefish theme to display useful information in my shell prompt. It tells me what directory I’m in and what the return status of the last command was, and it has Git integration too.

## IntelliJ IDEA Ultimate

IntelliJ is a Java IDE, but calling it that does it a disservice. It’s also a terminal client, a database client, and a workflow manager. The Java coding experience is brilliant and it makes huge projects easy to browse. It does have its downsides: it eats memory and CPU, and it doesn’t cope brilliantly with generated source code.

# Using static code analysis tools with Jenkins Pipeline jobs

I’ve been using Jenkins 2 more and more in my projects and I’ve found that the Pipeline feature is a great fit for the jobs that I am writing. However, the documentation isn’t fantastic for the older plugins. One example of this is static code analysis tools such as PMD, CheckStyle and FindBugs. In old Jenkins, there was a screen to configure how the results of these plugins affected the result of the build.

Obviously, in a Pipeline job, you have to configure this via code rather than on a web page, but it isn’t documented how (at least, not that I could find). I wanted the build to be marked UNSTABLE if any warnings were generated. After a bit of digging, I figured out it should look like this:

```groovy
step([$class: 'hudson.plugins.checkstyle.CheckStylePublisher', pattern: '**/target/checkstyle-result.xml', unstableTotalAll: '0'])
```

PMD and FindBugs are very similar as they are based on the same publisher class:

```groovy
step([$class: 'PmdPublisher', pattern: '**/target/pmd.xml', unstableTotalAll: '0'])
step([$class: 'FindBugsPublisher', pattern: '**/findbugsXml.xml', unstableTotalAll: '0'])
```
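These step calls live inside a normal Pipeline script. As a sketch only, a minimal scripted Pipeline using them might look like the following (the stage names and the Maven invocation are my assumptions, not from the original post; the report paths are the ones used above):

```groovy
node {
    stage('Build') {
        checkout scm
        // Produce the analysis XML reports; the exact goals depend on your build.
        sh 'mvn clean verify checkstyle:checkstyle pmd:pmd findbugs:findbugs'
    }
    stage('Static analysis') {
        // unstableTotalAll: '0' marks the build UNSTABLE on any warning at all.
        step([$class: 'hudson.plugins.checkstyle.CheckStylePublisher', pattern: '**/target/checkstyle-result.xml', unstableTotalAll: '0'])
        step([$class: 'PmdPublisher', pattern: '**/target/pmd.xml', unstableTotalAll: '0'])
        step([$class: 'FindBugsPublisher', pattern: '**/findbugsXml.xml', unstableTotalAll: '0'])
    }
}
```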


The more complicated configuration options listed on the web page above can be found in this class. All the static code analysis tools extend HealthAwareRecorder, so the configuration is the same for each: use the setters to push configuration via properties in the step definition. For example, if I want to set the healthy threshold to 20 and the unhealthy threshold to 100, then by looking at the class I find

```java
/**
 * Returns the healthy threshold, i.e. when health is reported as 100%.
 *
 * @return the 100% healthiness
 */
@Override
@CheckForNull
public String getHealthy() {
    return healthy;
}

@DataBoundSetter
public void setHealthy(final String healthy) {
    this.healthy = healthy;
}

/**
 * Returns the unhealthy threshold, i.e. when health is reported as 0%.
 *
 * @return the 0% unhealthiness
 */
@Override
@CheckForNull
public String getUnHealthy() {
    return unHealthy;
}

@DataBoundSetter
public void setUnHealthy(final String unHealthy) {
    this.unHealthy = unHealthy;
}
```


so I would use

```groovy
step([$class: 'hudson.plugins.checkstyle.CheckStylePublisher', pattern: '**/target/checkstyle-result.xml', healthy: '20', unHealthy: '100'])
```


An example of this can be found on my GitHub account here.

Clearly this is more complicated than just filling in the values on the website, so it will be interesting to see if a snippet generator pops up for this.

# Using the HP IDOL OnDemand APIs to enrich unstructured data

Unstructured data is all around us (emails, log files, Facebook posts, Twitter statuses), yet it’s difficult to analyse. HP has a product that does some common analysis tasks for you. It’s called IDOL (Intelligent Data Operating Layer) and comes with an easy-to-call web API, IDOL OnDemand (https://www.idolondemand.com). In this post I’m going to look at how to call a few of the text analysis endpoints from a Java program.

## Insightful tweets

Companies need to understand how their products are viewed, and a useful way of discovering this is to analyse posts on social media. We can get a rough measure of whether a user thinks positively or negatively about a product by performing sentiment analysis on the post. So, let’s build an application that looks at tweets on a given subject and displays the sentiment along with some other information.

## Endpoints

The three endpoints we will use are language identification, sentiment analysis and text highlighting: one for each piece of information we want to attach to a tweet.

## Calling from Java

The endpoints are called using a single HTTP GET with parameters attached. Here’s how we do this in Java.

First we create the URL. Let’s take the sentiment analysis call as an example. The three things that we need to provide (as shown by the documentation) are

• API key (your personal identifier)
• The text to analyse
• The language of the text

We create a request object

```java
public class SentimentAnalysisRequest {
    private static final String REQUEST_STRING = "analyzesentiment";
    private static final String VERSION_STRING = "v1";
    private final ProcessRequestType type;
    private final String identifier;
    private final SentimentLanguage language;

    public SentimentAnalysisRequest(ProcessRequestType type, String identifier, SentimentLanguage language) {
        this.identifier = identifier;
        this.type = type;
        this.language = language;
    }

    public String toUrlComponent() {
        return REQUEST_STRING + "/" + VERSION_STRING + "?" + type.urlSegment + "=" + UrlUtils.urlEncode(identifier)
                + (language == null ? "" : "&language=" + language.name());
    }

    public enum SentimentLanguage {
        ENG, FRE, SPA, GER, ITA, CHI
    }
}
```


and collapse it to a URL string (toUrlComponent). One thing to notice here is that we need to encode the text to analyse so that it can be sent safely in a URL.
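The UrlUtils.urlEncode helper isn’t shown in the post; a minimal sketch of what it likely does, assuming it wraps the JDK’s standard URLEncoder:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class UrlEncodeDemo {
    // Percent-encode text for use as a query parameter value.
    static String urlEncode(String text) {
        return URLEncoder.encode(text, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Spaces become '+' in application/x-www-form-urlencoded encoding.
        System.out.println(urlEncode("I like fish for dinner")); // I+like+fish+for+dinner
    }
}
```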

Given text of ‘I like fish for dinner’ the URL ends up looking like this:

https://api.idolondemand.com/1/api/sync/analyzesentiment/v1?text=I+like+fish+for+dinner&apikey=0351acf2-ce16-4076-8ac2-3442556

The code below shows a standard method for requesting a URL; this is used for all the API endpoints.


```java
private Response executeGet(String url) throws IOException {
    HttpURLConnection httpUrlConnection = null;
    InputStream inputStream;

    try {
        httpUrlConnection = (HttpURLConnection) new URL(url).openConnection();
        httpUrlConnection.connect();

        inputStream = httpUrlConnection.getResponseCode() != HTTP_OK
                ? httpUrlConnection.getErrorStream()
                : httpUrlConnection.getInputStream();

        return new Response(httpUrlConnection.getResponseCode(), fromInputStream(inputStream));
    } finally {
        closeQuietly(httpUrlConnection);
    }
}
```


The response that we receive is JSON formatted. To do anything useful with it we need to deserialize it into a Java object. Jackson is a great library for doing this easily: it gets out of your way and doesn’t require mapping files or annotations.

We know what to expect from the response (from the docs) and here is the response for our ‘I like fish for dinner’ example:

```json
{
    "positive": [
        {
            "sentiment": "like",
            "topic": "fish for dinner",
            "score": 0.7176687736973063,
            "original_text": "I like fish for dinner",
            "original_length": 22,
            "normalized_text": "I like fish for dinner",
            "normalized_length": 22
        }
    ],
    "negative": [],
    "aggregate": {
        "sentiment": "positive",
        "score": 0.7176687736973063
    }
}
```


Pretty good analysis!

If we provide a Java object that closely matches the structure of the JSON, then Jackson can do the rest and we don’t have to write much code for the conversion. Here it is:

```java
public class SentimentAnalysisResponse {

    private List<SentimentDetails> negative;
    private List<SentimentDetails> positive;
    private SentimentAggregate aggregate;

    public SentimentAnalysisResponse() {}

    public SentimentAggregate getAggregate() {
        return aggregate;
    }

    public void setAggregate(SentimentAggregate aggregate) {
        this.aggregate = aggregate;
    }

    public List<SentimentDetails> getNegative() {
        return negative;
    }

    public void setNegative(List<SentimentDetails> negative) {
        this.negative = negative;
    }

    public List<SentimentDetails> getPositive() {
        return positive;
    }

    public void setPositive(List<SentimentDetails> positive) {
        this.positive = positive;
    }

    public static class SentimentAggregate {
        private String sentiment;
        private Double score;

        public String getSentiment() {
            return sentiment;
        }

        public void setSentiment(String sentiment) {
            this.sentiment = sentiment;
        }

        public Double getScore() {
            return score;
        }

        public void setScore(Double score) {
            this.score = score;
        }
    }

    public static class SentimentDetails {
        private String sentiment;
        private String topic;
        private Double score;
        private String normalized_text;
        private String original_text;
        private Integer original_length;
        private Integer normalized_length;

        public SentimentDetails() {}

        public String getSentiment() {
            return sentiment;
        }

        public void setSentiment(String sentiment) {
            this.sentiment = sentiment;
        }

        public String getTopic() {
            return topic;
        }

        public void setTopic(String topic) {
            this.topic = topic;
        }

        public Double getScore() {
            return score;
        }

        public void setScore(Double score) {
            this.score = score;
        }

        public String getNormalized_text() {
            return normalized_text;
        }

        public void setNormalized_text(String normalized_text) {
            this.normalized_text = normalized_text;
        }

        public Integer getOriginal_length() {
            return original_length;
        }

        public void setOriginal_length(Integer original_length) {
            this.original_length = original_length;
        }

        public Integer getNormalized_length() {
            return normalized_length;
        }

        public void setNormalized_length(Integer normalized_length) {
            this.normalized_length = normalized_length;
        }

        public String getOriginal_text() {
            return original_text;
        }

        public void setOriginal_text(String original_text) {
            this.original_text = original_text;
        }
    }
}
```


Note that there is a default (no-args) constructor for each object. This is required by Jackson due to the way it constructs objects. Our code to translate the JSON response into this object is

```java
ObjectMapper mapper = new ObjectMapper();
SentimentAnalysisResponse resp = mapper.readValue(response, SentimentAnalysisResponse.class);
```


It should be clear what we are doing here. Putting it all together looks like this:

```java
public SentimentAnalysisResponse analyseSentimentUsingText(String text, String language) {
    SentimentAnalysisRequest.SentimentLanguage lang;
    try {
        lang = SentimentAnalysisRequest.SentimentLanguage.valueOf(language.toUpperCase());
    } catch (IllegalArgumentException e) {
        lang = null;
    }
    SentimentAnalysisRequest req = new SentimentAnalysisRequest(ProcessRequestType.TEXT, text, lang);
    String urlComponent = req.toUrlComponent();
    String response = null;
    try {
        response = executeGet(BASE_URL + "sync" + "/" + urlComponent + getApiUrlComponent()).content;
        ObjectMapper mapper = new ObjectMapper();
        return mapper.readValue(response, SentimentAnalysisResponse.class);
    } catch (JsonMappingException e) {
        logger.error("error encountered for response " + response + " with text " + text);
        return null;
    } catch (IOException e) {
        logger.error("Exception encountered when trying to analyse sentiment", e);
        return null;
    }
}
```


This can be repeated for all of the API endpoints in a similar manner. Once I’ve written the calls for a few more endpoints I’ll publish it as a library.

I’m not going to cover how to get the statuses from Twitter as that is out of scope, but you can look at the code for this project here. We have a stream of incoming tweets, to which we attach extra information from the IDOL API:

• The language of the tweet
• The sentiment analysis
• The tweet text with key terms from the sentiment analysis highlighted

These then get sent to a store, where they are retrieved when requested by the front-end web page.

The web page is little more than a list of the enriched tweets. It uses JSP to obtain the list and Bootstrap to organise all the elements of the page.

A small bit of JavaScript allows us to see more details about each tweet, and we are done! Here is a little GIF of the final version.

## Conclusion

The HP IDOL API provides powerful analysis tools in an easy-to-use format. It can be combined with all sorts of unstructured data sources to create a useful tool in a small amount of time.

# Drawing a proper square using the Projection Matrix

To understand why the square in the previous post was rendered as a rectangle, we need to understand how OpenGL transforms the 3D vertices we specify into 2D window coordinates. This is achieved through a four-stage process.

The section that we need to look at closely is the projection matrix. This is a $4 \times 4$ matrix mapping vertices from $\mathbb{R}^3 \rightarrow \mathbb{R}^3$. Any vertices that are mapped into the cube bounded by

$-1 \le x \le 1, \quad -1 \le y \le 1, \quad -1 \le z \le 1$

are eventually drawn on screen.

So, we are interested in which section of $\mathbb{R}^3$ is mapped to the above cube, i.e. we need to know the preimage of the cube so that we know where to place our vertices. There are two types of projection matrix in common use, both of which OpenGL provides commands to build.

## Orthographic projection

An orthographic projection maps a rectangular box to the cube defined above. One property of the orthographic projection is that shapes closer to or further away from the ‘camera’ are drawn at the same size, i.e. there is no perspective effect. I say ‘camera’ with some reservation as there isn’t really a camera in OpenGL. The only sense in which we have a camera is that if we have two opaque objects which are exactly the same except one is at z=-1 and the other is at z=-2, then only the first one will be visible, because OpenGL draws the first on top of the second. In this sense the ‘camera’ is pointing down the negative z-axis but still has no real position. We set an orthographic projection using the following command:

```java
void glOrtho(double left,
             double right,
             double bottom,
             double top,
             double nearZVal,
             double farZVal);
```
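For reference, the matrix that glOrtho builds from its six arguments (writing $l, r, b, t, n, f$ for left, right, bottom, top, nearZVal, farZVal) is the standard orthographic matrix:

$$\begin{pmatrix} \frac{2}{r-l} & 0 & 0 & -\frac{r+l}{r-l} \\ 0 & \frac{2}{t-b} & 0 & -\frac{t+b}{t-b} \\ 0 & 0 & \frac{-2}{f-n} & -\frac{f+n}{f-n} \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

You can check that it maps $x=l$ to $-1$ and $x=r$ to $1$ (and similarly for the other axes, with a sign flip in $z$).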


This type of projection matrix is used for CAD and architectural systems, where it is important to be able to compare the sizes of objects, and in 2D games.

## Perspective projection

This performs a mapping from a frustum to the cube defined above. This type of projection matrix is used to model the real world as we see it. It has the property that two identical objects, one close to the origin and one far away, appear on the screen at different sizes: the closer the object is, the bigger it seems.

Later in the process this cube is mapped to the 2D window coordinates, so the projection matrix defines what area of the 3D world is eventually visible on screen, and more specifically, which of the vertices that we specify are visible. Our visible area looks like this in the 3D world:

Quite simply, any vertex that is in this area after the modelview transformation is drawn on screen. The focus of the frustum (the point where the edges would meet if extended) is always (0,0,0) by definition. The analogy with real-world viewing is immediately apparent: if we imagine that our viewer is at (0,0,0) looking in the direction of the frustum, then we can think of the near boundary of the frustum as the computer screen and the far boundary as the point beyond which nothing useful can be seen.

We define a frustum with the following command:

```java
void glFrustum(double left,
               double right,
               double bottom,
               double top,
               double zNear,
               double zFar);
```


where left, right, bottom etc. are as defined here. The matrix which performs the mapping is derived here.
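For completeness, the matrix that glFrustum builds (writing $l, r, b, t, n, f$ for its six arguments) is:

$$\begin{pmatrix} \frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\ 0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\ 0 & 0 & -\frac{f+n}{f-n} & \frac{-2fn}{f-n} \\ 0 & 0 & -1 & 0 \end{pmatrix}$$

The $-1$ in the bottom row is what produces the perspective divide: after the matrix is applied, coordinates are divided by $w = -z$, so distant objects shrink.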

## Some other Points

Both glOrtho and glFrustum can only be called once OpenGL is put into a specific mode: it must be told to interpret the following commands as operating on the projection matrix. This command is

```java
glMatrixMode(GL_PROJECTION);
```


After this command is called, it is good practice to call

```java
glLoadIdentity();
```


in case there are any commands preloaded into the projection matrix. You should also note that OpenGL applies the commands after glMatrixMode cumulatively, i.e. if you give several commands, you end up with a matrix which is the product of all of them.
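More precisely, each matrix command post-multiplies the current matrix: starting from the identity, issuing commands whose matrices are $C_1, C_2, \ldots, C_k$ leaves the projection matrix as

$$M = C_1 C_2 \cdots C_k$$

which is why calling glLoadIdentity first is needed to get a clean starting point.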

## Why is our square a rectangle?

We now return to the original question: why is our square drawn incorrectly? It turns out that this is due to the shape of our window. In our program, we didn’t specify a projection matrix, so the default one is used. This is the identity matrix, which is equivalent (up to a flip of the z-axis) to calling

```java
glOrtho(-1, 1, -1, 1, -1, 1);
```


So, our square is within the viewing volume and is drawn. It looks exactly as it did before the transformation. Now the square is mapped to the screen, in particular to our window. However, our window isn’t square in shape; it is a rectangle. So our square is squashed vertically to make it fit. This is bad.

How do we solve this? Well, we could change the shape of the window, but that isn’t getting to the root of the problem. The answer is to specify a projection transformation that maps our square to a rectangle stretched in the x-direction by the right amount, so that when it is mapped to the window it appears correctly. The right amount is the aspect ratio of the window. We will call

```java
glOrtho(-aspectRatio, aspectRatio, -1.0d, 1.0d, -1.0d, 1.0d);
```
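To make the “right amount” concrete, here is a small, hypothetical helper (not from the original source) that computes the glOrtho bounds from the window’s pixel dimensions:

```java
public class OrthoBounds {
    // Returns {left, right, bottom, top} for glOrtho, stretching the x-range
    // by the window's aspect ratio so a unit square stays square on screen.
    static double[] boundsFor(int widthPx, int heightPx) {
        double aspectRatio = (double) widthPx / heightPx;
        return new double[] { -aspectRatio, aspectRatio, -1.0, 1.0 };
    }

    public static void main(String[] args) {
        // A window twice as wide as it is tall gives an aspect ratio of 2.0,
        // so the visible x-range becomes [-2, 2].
        double[] b = boundsFor(800, 400);
        System.out.println(b[0] + ", " + b[1] + ", " + b[2] + ", " + b[3]);
    }
}
```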


The source code for the fixed square is here.

# Draw a square

Let’s draw a shape on our newly created Display. Most of the basic OpenGL commands in LWJGL are contained in the GL11 class. They are all static methods due to the procedural nature of the specification. To create points/lines/shapes we specify a number of vertices wrapped in a context.

## Vertices

…or points, whatever you want to call them. These are simply locations in 3-dimensional space, specified by 3 coordinates (the familiar x, y and z). Actually, OpenGL handles vertices using 4 coordinates per point, but more about that later…

A vertex can be recorded using floats or doubles as the coordinates:

```java
glVertex3f(float x, float y, float z)
glVertex3d(double x, double y, double z)
```


## Contexts

There are many different contexts to choose from (GL_POINTS, GL_LINES, GL_LINE_LOOP to name a few). The context describes how to associate the vertices that are specified. For example, GL_LINES connects pairs of points with lines, GL_TRIANGLES connects them in triangles, etc. The context is specified like so:

```java
glBegin(GL_LINE_LOOP);
glVertex3f(-0.5f, -0.5f, 0f);
glVertex3f(-0.5f, 0.5f, 0f);
glVertex3f(0.5f, 0.5f, 0f);
glVertex3f(0.5f, -0.5f, 0f);
glEnd();
```


The specific rules about each context are contained in any good OpenGL reference. A vertex call is meaningless without a context; the vertex cannot be stored and used later, so it must always appear along with the information about how it is to be used.

We now have the prerequisite tools to create a square on screen. The code for this example is hosted here.

Simple! You probably have a lot of questions now:

1) Why has OpenGL rendered a rectangle instead of a square?
2) How did I know which coordinates to use so that the rectangle appears on the screen?

These questions will be answered in my next post.