Content Management with AWS Lambda

Crazy? Probably, but it won’t be the first time that has been suggested.

First, let me offer some background. Recently I have had the opportunity to see what content management systems do and how they are utilized. Products like Documentum and Alfresco are meant for general use. By their nature these systems are less efficient and more complex than something built for a specific purpose. For some agencies this works out well: they don't have IT organizations that could develop a system in house, the ECM is their system of record (SOR), and a general-purpose product is a good solution. When the system of record lies outside the ECM there is less to be gained. There may be an existing workflow that doesn't match the general flow that the ECM defines. I felt there had to be a simpler way. How difficult could it be? I am not suggesting building for general use, but instead building only what is needed.

Note: The model I used is an EHR (Electronic Health Record) system.

The core

Basically there are only three pieces: a database to store metadata, a file system to store the content, and processes for creating, retrieving, updating and deleting (CRUD) the information.

Other stuff

Thumbnails: For an EHR system this would not likely be needed. There is not a great variety of document types where a "preview" would be useful. This could be a requirement in other applications.

Transformation: EHR systems use a small number of standard file formats. It's required to show the HL7 data in its native format; converting the data to an image or PDF is not done. But this could be a requirement in other applications.

Versioning: This could be useful in any content store.

Starting out old school

My first thought was to go with what I know: Java, Spring Boot and JPA. Start with a database. Since this will be on AWS, MariaDB is a good place to start. It's MySQL-compatible and free to start with. For an EHR system the content is the patient data, stored in HL7 format. Since the system of record is the content, the database doesn't have to be very complex. Two or three tables is more than enough.



Create an app

Using Eclipse I created a new Spring Boot JPA application (which includes Hibernate). Eclipse also generated the entities from the database and some of the support code. A few hours later I had a Spring Boot CRUD app that could read and write to a database. S3 would be the choice for the content since the app would be running on AWS. Fortunately AWS offers nice Java support for S3.

With this done I had a basic content management system. AWS suggests Elastic Beanstalk for deploying applications. It's not the simplest thing but it does work. My REST service was very simple: a JSON file for metadata and the HL7 (XML) file for content.

This was not a ready for production system but with AWS it was pretty quick and simple to get something working.


Something didn't feel right. This is the same process/framework that everyone is using; it's not new. Since I am studying for the AWS exams, shouldn't I consider this from the AWS point of view?


If you want to know what AWS thinks is the future, think Lambda: "Run code without thinking about servers. Pay for only the compute time you consume." Lambda is what powers Lex and Alexa. I won't repeat everything AWS says about Lambda, but they are putting a lot of effort into it.

Building a content management system based around Lambda

I still need a database (or do I?) and a file system to store the content. I already have the database and S3 from the Java project, so there is no need to start over. What is missing is the CRUD app that I built with Java.

Since they are going to sit idle until needed, Lambda functions should be lightweight and quick to start up. AWS allows Lambda functions to be written in JavaScript (Node.js), Python, Java or C#. Java and C# seem too heavy; Spring and Hibernate don't fit into this picture. I felt this left two options, JavaScript or Python. Both have their advantages (I use both). I went with Python. I learned later that JavaScript is the only choice for some third-party tools. As in the Java application, I have chosen to write the content and the metadata to S3. The metadata is written to the database as well. S3 has an option to add "metadata" to the object. By writing the metadata as a file I could leverage Solr to search content and metadata! In theory this eliminates the need for a database.

AWS has support and examples for creating Lambda functions in Python. "pymysql" and "boto3" are Python libraries for MySQL and S3. Both are available and do not require the developer to add them.

Python is deployed to Lambda as a deployment package. This is simply a zip file with your Python code and any external libraries not already supported by AWS. The trick to this is getting the Python file and Lambda handler names correct. I used contentHealthLambdaHandler as the handler function. Below is how they are used in the Lambda configuration.
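Packaging can also be scripted. Here is a minimal sketch (the file names and function are my own placeholders, not from my project) that zips a handler file plus any vendored library directories into a deployment package:

```python
# Build a Lambda deployment package: a zip with the handler at the root
# plus any bundled libraries. Names here are hypothetical placeholders.
import os
import zipfile

def build_deployment_package(zip_name, handler_file, lib_dirs=()):
    """Zip the handler file and any extra library directories."""
    with zipfile.ZipFile(zip_name, 'w', zipfile.ZIP_DEFLATED) as zf:
        # the handler file must sit at the root of the zip
        zf.write(handler_file, os.path.basename(handler_file))
        for lib in lib_dirs:  # e.g. a vendored pymysql/ directory
            for root, _, files in os.walk(lib):
                for name in files:
                    path = os.path.join(root, name)
                    zf.write(path, os.path.relpath(path, os.path.dirname(lib) or '.'))
    return zip_name
```

The zip is then uploaded in the Lambda console (or with the AWS CLI), and the handler is configured as `<file name without .py>.<function name>`.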

The code

Note: The code I am including is basic. Almost all of the error handling has been ignored.

Standard Python imports

import pymysql
import datetime
import boto3
from io import StringIO

The lambda handler function definition

def contentHealthLambdaHandler(event, context):

Lambda passes parameters as a Python dictionary. I am passing in two parameters, metaData and content.

content = event['content']
metaData = event['metaData']

The metadata is patient information (patientNumber, patientFirstName, patientLastName, …) in JSON format. I have left out the parsing of this as it's trivial.
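For completeness, that parsing could look like the sketch below (the payload values are invented for illustration):

```python
import json

# Hypothetical metadata payload following the field names above.
metaData = json.dumps({
    "patientNumber": "123456",
    "patientFirstName": "Frank",
    "patientLastName": "Smith",
})

patient = json.loads(metaData)  # back to a dict
print(patient["patientNumber"])  # -> 123456
```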

Make sure the parameters are included:

if 'metaData' in event and 'content' in event:
    content = event['content']
    metaData = event['metaData']
else:
    return "error"

Create an S3 resource. This is used to write or read to S3.

s3 = boto3.resource('s3')

Store the data to S3. In this case the bucket is fixed but it could be passed as a parameter.

target_bucket = "com.contentstore"

Create a file name. createDateTime is the current timestamp, set when the function starts.

createDateTime = datetime.datetime.now()
target_file = metaData + "_md" + str(createDateTime)

# it reads like a file handle

temp_handle = StringIO(metaData)

Create or get the bucket.

bucket = s3.Bucket(target_bucket)

Write to the bucket

result = bucket.put_object(Key=target_file, Body=temp_handle.read())

That is all there is: six lines of code (error handling not included). Six more lines are required for storing the content to S3. I did not test with a very large file and there may be more effort required in those cases; I have not noticed anyone mention additional issues.

Write to the database

The connect string is familiar to anyone who has done Java database coding before.

conn = pymysql.connect(host='myurl',user='ec2user',passwd='xxxxyyy',db='mydb')

My schema contains two tables, a base document and a patient document. Since the patient document has a foreign key to the base document I have to store them separately. There are Python ORMs that would probably handle this, but it's so simple that basic SQL will suffice. All of the database code is wrapped in a try-except clause. If any of the executes fail, the commit will never happen.
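The pattern is sketched below. I've used sqlite3 in place of pymysql so it runs anywhere; the commit-only-on-success flow is the same:

```python
# Transaction pattern: if any execute raises, the commit is never reached.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE documentbase(id INTEGER PRIMARY KEY, description TEXT)")
try:
    cursor = conn.cursor()
    cursor.execute("INSERT INTO documentbase(description) VALUES (?)", ("healthRecord",))
    conn.commit()    # only reached if every execute succeeded
except Exception:
    conn.rollback()  # a failed execute means nothing is committed
```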

Store the base document

cursor = conn.cursor()
baseDocumentInsert = "INSERT INTO documentbase(createDateTime,description,contentURL) VALUES(%s,%s,%s)"
args = (createDateTime, "healthRecord", result.bucket_name +"/"+ result.key)
cursor.execute(baseDocumentInsert, args)

Store the patient document. Note that "cursor.lastrowid" is the id of the base document and will be the patient document's foreign key.

patientDocumentInsert = "INSERT INTO patientdocument(patientNumber,patientFirstName,patientLastName,docBase) VALUES(%s,%s,%s,%s)"
args = (metaData, "Frank","smith",cursor.lastrowid)
cursor.execute(patientDocumentInsert, args)

If nothing has failed, commit the changes.

conn.commit()


This is the complete function. It will store content and metadata to S3 and metadata to the database.

import datetime
from io import StringIO

import boto3
import pymysql

def contentHealthLambdaHandler(event, context):
    if 'metaData' not in event or 'content' not in event:
        return "error"
    content = event['content']
    metaData = event['metaData']
    print("metaData: " + metaData)
    print("content: " + content)

    createDateTime = datetime.datetime.now()
    s3 = boto3.resource('s3')
    target_bucket = "com.contentstore"
    bucket = s3.Bucket(target_bucket)

    # store the meta data
    target_file = metaData + "_md" + str(createDateTime)
    temp_handle = StringIO(metaData)  # reads like a file handle
    result = bucket.put_object(Key=target_file, Body=temp_handle.read())

    # store the content data
    target_file = metaData + str(createDateTime)
    temp_handle = StringIO(content)
    result = bucket.put_object(Key=target_file, Body=temp_handle.read())

    conn = pymysql.connect(host='myurl', user='ec2user', passwd='xxxxx', db='mydb')
    try:
        cursor = conn.cursor()
        baseDocumentInsert = "INSERT INTO documentbase(createDateTime,description,contentURL) VALUES(%s,%s,%s)"
        args = (createDateTime, "healthRecord", result.bucket_name + "/" + result.key)
        cursor.execute(baseDocumentInsert, args)
        print("baseDocumentInsert id " + str(cursor.lastrowid))
        patientDocumentInsert = "INSERT INTO patientdocument(patientNumber,patientFirstName,patientLastName,docBase) VALUES(%s,%s,%s,%s)"
        args = (metaData, "Frank", "smith", cursor.lastrowid)
        cursor.execute(patientDocumentInsert, args)
        conn.commit()
        cursor.execute("SELECT * FROM patientdocument")
        for row in cursor.fetchall():
            print(row)
    except Exception as e:
        return "error: " + str(e)
    finally:
        conn.close()

    return "got it"


The first level of testing is done from the AWS Lambda console.


CloudWatch logs all of the output so you can easily see what happened.

Rest service

In order to use the Lambda function it needs to be exposed as a REST endpoint. This is done using the API Gateway. The process is well documented so I won't go into it. The API Gateway setup can be done separately, as I did, or at the time the Lambda function is created.

This link walks you through the process: Build an API to Expose a Lambda Function

Testing the new rest service

The simplest way to do this is using Postman. When you create the REST endpoint the API Gateway console will supply an Invoke URL. This can be used in Postman to test the new service.

The other way to test is using a Python client. The central part of the client is requests.post(url, data), where 'data' is a JSON string and url is the Invoke URL.


The content is in HL7 format. The metadata is simply a randomly generated patient id. The code below creates twenty POST requests to upload content and metadata to the Lambda service.

import datetime
import json
from random import randint
import requests

starttime = datetime.datetime.now()
for i in range(20):
    id = str(randint(100000, 400000))  # generate a random patient ID
    data = json.dumps({'metaData': id, 'content': 'data removed for simplicity'})
    r = requests.post('', data)  # '' stands in for the API Gateway Invoke URL
endtime = datetime.datetime.now()
print(str(endtime - starttime))

The result is less than 1 second per POST call. The data is small (3 KB), but this was done from my home laptop into AWS. I would expect better rates in a "real" environment.

The content data in S3

AWS MariaDB

One issue with Lambda is that it is slower to respond the first time, since the function has to spin up. I am not clear on the time window where the function stays warm versus idle. It's something I need to look into.

Other stuff

As mentioned earlier there are functions needed beyond storing data.

  • Read or search for metadata: def contentHealthLambdaHandlerRead(event, context)
  • Read content: def contentHealthLambdaHandlerReadContent(event, context)


Transformation

This involves converting various document formats into one standard format, likely PDF. Other ECMs use third-party tools to do this work. Using Lambda would not prevent using a similar third-party tool, but I would prefer that conversion be done beforehand, in the code calling the REST service; it's not an integral part of the content store. Another way to achieve this is to use a Lambda trigger to start the transformation. ImageMagick or LibreOffice can be used to convert the files as they are written to S3.


Thumbnails

This is also where third-party tools come into play. Lambda has a great way to handle this: triggers. A function is set up to trigger when a file is added to S3, and it handles the process of creating the image or images. The examples of this use something like ImageMagick. The only issue I found is that it is currently only usable in Lambda with JavaScript. It's not a big deal, but I'd have to part from Python for a while.


Versions

S3 can version documents automatically. AWS lifecycle rules can use versions to move data to other storage options such as Glacier. "boto3" supports S3 versions, so it's possible to filter and return information based on versions.
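As a sketch (not run here, since it needs AWS credentials and a versioned bucket; the bucket name is just the placeholder used earlier), listing stored versions with boto3 might look like:

```python
# Hypothetical sketch: list all stored versions of objects under a prefix.
# Requires AWS credentials and a versioned bucket, so it is not executed here.
def list_document_versions(bucket_name, prefix):
    import boto3  # imported inside so the function can be defined without boto3
    bucket = boto3.resource('s3').Bucket(bucket_name)
    # each ObjectVersion carries the key and a version id
    return [(v.object_key, v.id) for v in bucket.object_versions.filter(Prefix=prefix)]
```

For example, list_document_versions("com.contentstore", "123456") would return every stored revision of that patient's files.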


Lambda is becoming AWS's path forward. They continue to improve it and add features.

This effort was for educational purposes. But it shows how the tools we have today can make building great software so much simpler.

Posted in Uncategorized | Leave a comment

An experiment with Lucene and Shakespeare

There is a lot of talk about Solr these days. The engine that drives Solr is Lucene, which I have used indirectly through Neo4j, but never directly. Maybe it's time to see how it works.


Lucene is a full-text search library originally written in Java. A key feature is its use of an inverted index: instead of storing pages, it stores keyword indexes to pages. This fact would dictate how to proceed. Tools like R and Python's NLTK are used in text mining, where the interest is in text analysis: how words are related to each other. Lucene tends to focus on search at the page level: what is the frequency of a term within a set of pages, and how similar are pages to each other. That is one reason why Lucene is so popular for web search.
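The inverted-index idea is easy to picture with a toy sketch (nothing Lucene-specific; the documents are invented):

```python
# A toy inverted index: map each term to the set of documents containing it,
# instead of scanning every document at query time.
docs = {
    1: "the quick brown fox",
    2: "the lazy dog",
    3: "the quick dog",
}

inverted = {}
for doc_id, text in docs.items():
    for term in text.split():
        inverted.setdefault(term, set()).add(doc_id)

print(sorted(inverted["quick"]))  # -> [1, 3]
print(sorted(inverted["dog"]))    # -> [2, 3]
```

Looking up a term is now a dictionary access rather than a scan, which is what makes page-level search fast.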
In order to make the best use of Lucene I'd need data that is split into pages and not one big document. A bit of searching led me to Shakespeare; he has a good deal of text (even if he didn't write it all). A lot of the work done with his writing is text mining/analysis, and as such most sources use a single document. MIT is a good source for this, but after looking at it I felt it was not going to fit my needs. The single document has all of his writings but there wasn't an obvious way to delineate it into separate texts. I found one site that had all of the works in separate HTML files.

Process the files

I started with Python to get the files and remove all of the HTML and punctuation, leaving plain text. I hoped to continue with Python, but Lucene only has a great Python library if you are on a Mac or Linux system. It was tempting to switch to a Raspberry Pi since it's Linux, but I went with Java on Windows. Just too lazy I guess. In the end I had 196 separate text files (there are so many sonnets!).

Getting started with Lucene

The first thing I learned was that Lucene changes quite a bit between versions. I was using 6.6 and the book I had was 3.0. When searching for examples make sure you note the version.

There are two stages with Lucene: indexing and searching. There are a lot of 'switches' and 'levers' that can be applied depending on the goal. I wanted to index all of the documents with frequencies and positions. You need to define a field to be indexed. This can be specific like 'fileName' or 'Title', or it can be 'contents', which in this case is the entire file. Each Field needs a FieldType that defines how you want Lucene to handle it. One of these is setTokenized(). When set to true Lucene will break up a string into tokens; otherwise it treats the string as a single entity. Searching still works on non-tokenized data but the details are only at the string level. I wanted frequency and position calculated, which meant I needed TermVectors. They record frequency, position, and offset. Position is where the term lies in the document. Offset is the start and end position of the term. In this sentence, "the quick brown fox jumps over the lazy dog", the term vectors are:

Term    Freq  Position  Offset
brown   1     2         [10,15]
dog     1     8         [40,43]
fox     1     3         [16,19]
jumps   1     4         [20,25]
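The numbers in the table can be reproduced with a few lines of Python (a sketch of the bookkeeping, not Lucene's actual implementation):

```python
# Frequency, token position, and [start, end) character offsets per term.
sentence = "the quick brown fox jumps over the lazy dog"

vectors = {}
cursor = 0
for position, term in enumerate(sentence.split()):
    start = sentence.index(term, cursor)  # find this occurrence, not an earlier one
    end = start + len(term)
    cursor = end
    freq, positions, offsets = vectors.get(term, (0, [], []))
    vectors[term] = (freq + 1, positions + [position], offsets + [[start, end]])

print(vectors["brown"])  # -> (1, [2], [[10, 15]])
print(vectors["dog"])    # -> (1, [8], [[40, 43]])
```

Note "the" occurs twice, so its vector records frequency 2 with two positions and two offsets.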

As one would expect this takes more time to generate and also more storage space. Surprisingly, processing time was not much different.

Results for Lucene storage based on settings:

Index only: 4 KB / 1819 milliseconds
Index with TermVectors: 9.5 KB / 1982 milliseconds

The code to index the files is pretty simple.

1. Create a Directory. indexDir is where we want the output to go.

Directory dir = FSDirectory.open(Paths.get(indexDir));

2. Create an IndexWriter.

IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));

3. Process each file

Document document = new Document();
FieldType analyzed = new FieldType();
analyzed.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS);
analyzed.setStoreTermVectors(true);
analyzed.setStoreTermVectorOffsets(true);
analyzed.setStoreTermVectorPayloads(true);
analyzed.setStoreTermVectorPositions(true);
analyzed.setTokenized(true);

String text = new String(Files.readAllBytes(file.toPath()), StandardCharsets.US_ASCII);
// index file contents
Field contentField = new Field(LuceneConstants.CONTENTS, text, analyzed);
// index file name
Field fileNameField = new Field(LuceneConstants.FILE_NAME, file.getName(), analyzed);
// index file path
Field filePathField = new Field(LuceneConstants.FILE_PATH, file.getCanonicalPath(), analyzed);

document.add(contentField);
document.add(fileNameField);
document.add(filePathField);
writer.addDocument(document);


Searching

This part can be simple or very complex depending on the goal. The first three questions below are simple. The fourth, similarity, is something I am still working on, primarily because I am not sure how or what I want to measure.


1. How many documents contain a term?

2. In what documents is a term found?

3. How frequent is a term?

4. How similar are documents?

Question 1. This is the code that most searches will start with. In this case the term is "Midsummer".

int TOP_N_HITS = 10;
String q = "Midsummer";
Directory dir = FSDirectory.open(Paths.get(indexDir));
IndexReader ir = DirectoryReader.open(dir);
IndexSearcher searcher = new IndexSearcher(ir);
QueryParser parser = new QueryParser("contents", new StandardAnalyzer());
Query query = parser.parse(q);
TopDocs hits = searcher.search(query, TOP_N_HITS);

The result is 4 documents.

The next step is to find the files where "Midsummer" was found.

for (ScoreDoc scoreDoc : hits.scoreDocs) {
    Document doc = searcher.doc(scoreDoc.doc);
    IndexableField field = doc.getField(LuceneConstants.FILE_NAME);
    System.out.println(field.stringValue());
}

This will tell us that the term was found in these files.

  • Midsummer Night’s Dream Play
  • As You Like It Play
  • Twelfth Night Play
  • Henry IV, part 1 Play


One thing we don't know is how these compare. Was the term found once or many times? If it's only once maybe we don't care. Clearly the play Midsummer Night's Dream should rank higher than the rest. To find this out, Lucene has a Highlighter feature which will return fragments of the text surrounding the term.

In this first case we want to know where the term occurs more than once. Using the Highlighter to return surrounding text can help evaluate whether this is relevant. Only in the first play does the term "Midsummer" occur more than once.

  • The file name (in the text file): Midsummer Nights Dream Play.txt
  • The title: Midsummer Nights Dream Play (leftover text from the original HTML: "Shakespeare homepage")
  • Midsummer Nights Dream
    • Entire play      ACT I       SCENE I Athens

The last fragment shows that the text near the term is the start of ACT I. Maybe Midsummer wasn't a great term to start with?

Highlighter code. This code is within the loop over all documents. "scoreDoc.doc" is the document id for the current document in the loop.

// get the field named "contents"
String text = doc.get(LuceneConstants.CONTENTS);
Analyzer analyzer = new StandardAnalyzer();
SimpleHTMLFormatter htmlFormatter = new SimpleHTMLFormatter();
Highlighter highlighter = new Highlighter(htmlFormatter, new QueryScorer(query));
TokenStream tokenStream = TokenSources.getAnyTokenStream(searcher.getIndexReader(),
    scoreDoc.doc, LuceneConstants.CONTENTS, analyzer);
TextFragment[] frag = highlighter.getBestTextFragments(tokenStream, text, false, 2);

Loop through each TextFragment. If the score is not zero, print out the fragment of text.

for (TextFragment tf : frag) {
    if ((tf != null) && (tf.getScore() > 0)) {
        System.out.println(" fragment score " + tf.getScore() + "  " + tf.toString());
    }
}

The second and third questions are answered using the term vectors. Each TermVector will indicate the number of documents the term was found in and the number of times the term was found in a document. Instead of Midsummer, which was disappointing, I tried searching for something that should give more interesting results. Either romeo or juliet (heard of them?) should work.

The term 'romeo' was only found in one document, 'Romeo and Juliet' (surprise!). 'juliet' was found in two, 'Romeo and Juliet' and 'Measure for Measure'.

In 'Romeo and Juliet', the term 'romeo' occurred 295 times and 'juliet' occurred 178 times.

In 'Measure for Measure', the term 'juliet' occurred 16 times.

When processing the data I did not do any 'stemming'. This process reduces words to their root: juliets or julietta become juliet. This is a common practice and I could go back and clean up the data.
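For illustration, here is a deliberately crude suffix-stripper (real systems use a Porter-style stemmer; this toy only handles the two examples above):

```python
# Toy stemmer: strip a couple of suffixes so word variants map to one root.
def crude_stem(word):
    for suffix in ("ta", "s"):
        # only strip if a reasonable-length root remains
        if word.endswith(suffix) and len(word) - len(suffix) >= 4:
            return word[: -len(suffix)]
    return word

print(crude_stem("juliets"))   # -> juliet
print(crude_stem("julietta"))  # -> juliet
print(crude_stem("romeo"))     # -> romeo
```

Running a stemmer before indexing means one query matches all the variants, at the cost of losing the exact surface form.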

The fourth question, similarity, is difficult because I am not sure how I would consider the various documents to be similar. The basic similarity in Lucene is cosine similarity. The diagram below shows two vectors, one for each document.

The terms in question are transparency and supply. The closer the two vectors are to each other, the more 'similar' the documents could be, at least for these two terms. In the case of Shakespeare, using romeo and juliet would not tell us much since one term only appears in one file. The other side of this might be medical or insurance documents. I suspect that these contain a lot of the same wording and it wouldn't be hard to find documents that are similar. I'll have to experiment with different terms and see what, if anything, falls out.
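Cosine similarity itself is simple to sketch over term-frequency vectors (this mirrors the idea, not Lucene's scoring internals):

```python
# Cosine similarity of two bag-of-words vectors: dot product divided by the
# product of vector lengths. 1.0 means the same direction, 0.0 means no overlap.
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    a, b = Counter(text_a.split()), Counter(text_b.split())
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

print(round(cosine_similarity("romeo loves juliet", "juliet loves romeo"), 6))  # -> 1.0
print(round(cosine_similarity("transparency supply", "lazy dog"), 6))           # -> 0.0
```

Word order is ignored, which is why the first pair scores 1.0; only the term counts matter.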


Sample output with TermVectors enabled:

Indexing 195 files took 1982 milliseconds

Midsummer Night's Dream_ Entire Play.txt: fragment score 1.0
A <B>Midsummer</B> Nights Dream: fragment score 1.0

Files returned:

  • As You Like It_ Entire Play.txt
  • Twelfth Night_ Entire Play.txt
  • Henry IV, part 1_ Entire Play.txt

With indexing only:

Indexing 195 files took 1819 milliseconds

Terms tv = ir.getTermVector(scoreDoc.doc, LuceneConstants.CONTENTS);
if (tv != null) {
    TermsEnum terms = tv.iterator();
    BytesRef bytesRef = terms.next();
    while (bytesRef != null) {
        System.out.println("BytesRef: " + bytesRef.utf8ToString());
        System.out.println("docFreq: " + terms.docFreq());
        System.out.println("totalTermFreq: " + terms.totalTermFreq());
        bytesRef = terms.next();
    }
}

Sample output:

BytesRef: afraid    docFreq: 1  totalTermFreq: 4
BytesRef: after     docFreq: 1  totalTermFreq: 14
BytesRef: again     docFreq: 1  totalTermFreq: 19
BytesRef: against   docFreq: 1  totalTermFreq: 8
BytesRef: age       docFreq: 1  totalTermFreq: 2

 search for text: romeo 
   BytesRef: romeo (text found in result, just for verification)
   docFreq: 1
   totalTermFreq: 295
 search for text: juliet
    BytesRef: juliet (text found in result, just for verification)
    docFreq: 2
    totalTermFreq: 178


Lucene offers much more than I have seen so far, but its searching capabilities alone are interesting. Content management systems store files on a file system and metadata (information about the content) in a database. For AWS, the content could be on S3 and the metadata in Aurora. Lucene could be used to search content and metadata if both were stored on S3. This seems like a much simpler design…



Deep Learning- self driving car (hobby)

Really it is a Tesla… four electric motors, batteries, sensors, and two cameras.

Okay…kinda like a Tesla.

I have been fascinated with neural networks going back to the early 90's, when I was doing work on forms recognition and handwriting analysis. The idea lost appeal for a long time but has had a resurgence as "deep learning" is being used for processing large amounts of data. Self-driving cars are one area where they are making gains. Recently I watched the series by Dr. Lex Fridman (MIT 6.S094: Introduction to Deep Learning and Self-Driving Cars). Besides covering a lot about neural networks, he talked about how Tesla instruments their cars to "learn".

Is the car really learning to drive? Not exactly. By driving the car around it gathers data that can be used later on. The data includes images, sound, temperature, GPS and driver reactions. All of this data is fed into a neural network framework such as TensorFlow. The car knows only what it has "seen" before; the system is memorizing every possible situation. Anything that occurs out of context from what it knows could cause trouble, but the more data that is gathered the less likely it is that something unforeseen will occur. Of course AI is always improving and at some point will be able to make better choices (zero-shot learning).

I wanted a way to experiment with this myself. Buying a Tesla is out of the question. I could add sensors to my car, but that is just asking for distracted driving. Also, I work remote and don't drive a lot. The next best thing would be to create a small 'car' that I could use to gather data.

This car is not going to be on a road, which means it won't have things like lane lines to guide it. I might build a track where it could be driven. Or just wander around the house and scare the dog and cat.

The picture above shows a small RC car. It has a motor for each wheel. Steering is similar to tank driving: turns are done by slowing down one set of wheels while speeding up the other. Sharper turns can be done by reversing the wheels instead of just slowing them down. It's not very smooth but it gets the job done.

The car is first configured for data gathering. I am using an Arduino with Bluetooth for communications. I wrote a simple app for my Android tablet. There are six ultrasonic sensors for determining distance. I have two cameras (only one in the picture) mounted on the front. These will record stereo images which will help determine depth. The first thing I learned is that the ultrasonic sensors will only see a small portion of what is in front of them; on the first trial run they completely missed the table and chair legs. The sensors need to sweep the area in front so as to create a point cloud. For this I am adding pan and tilt control to the sensor mount: two servos will move the sensor array. Data is being recorded to a flash drive.


I am recording the data at one-second intervals. The car doesn't move very fast so this rate should be sufficient. The value at each sensor, the two camera images and the drive command are recorded.
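As a sketch, one logged record might look like the following (the field names and values are my own invention, not the actual format on the flash drive):

```python
# One training record per second: sonar distances, stereo image file
# names, and the drive command in effect when they were captured.
import json

def make_record(timestamp, sonar_cm, left_image, right_image, command):
    return {
        "time": timestamp,         # seconds since the start of the run
        "sonar_cm": sonar_cm,      # six ultrasonic distances
        "left_image": left_image,  # stereo pair for depth
        "right_image": right_image,
        "command": command,        # e.g. "forward", "left", "stop"
    }

record = make_record(12, [41, 55, 120, 98, 60, 44], "l_0012.jpg", "r_0012.jpg", "forward")
print(json.dumps(record, sort_keys=True))
```

Keeping the drive command alongside the sensor readings is what lets a network later learn "given these inputs, this is what the driver did."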

Currently I am in practice mode, refining the app to better control the car. I found some sensors for the wheels to detect the speed of rotation. I think I'd need to upgrade the Arduino to add any more devices, so I'll leave them off for the time being.


More later…

Posted in deep learning | Leave a comment

Data Centric programming – End of the Cloud


Interesting. The idea of data-centric computing is something I have been thinking about. The rise of machine learning plays a big part in this…

The End of Cloud Computing


Unity – pong game.. why not


The original pong game was not much compared to what would come later. For many it was amazing that something like it was available for use in the home. I have seen others make pong games in Unity and thought it might be fun to try. Pictures of the console show controls for two players. For this project I'll make the second player the computer. Yeah, it will be hard to beat, but…

Using Unity 5.5, start out with a basic camera.



Using a graphics tool I made a paddle and a ball (PNG format). Create a folder named Sprites and drop in the two images. Create an empty game object named 'player'.


Drag the paddle sprite onto the player game object and notice the Sprite Renderer show up as a component of the player. Select the player object and then select Component->Physics->RigidBody from the menu. The paddle will need this in order to bounce the ball.


Create a new folder named Physics. Select Asset->Create->PhysicsMaterial and name it bounce. When applied to the paddle it will cause the ball to bounce back.


In order for the paddle to react to the ball hitting it we need to add a collider component. A BoxCollider will do. Set the material of the collider to the bounce material.


Create a new game object called Ball and add in the ball sprite. Add a RigidBody and collider as well.


So far there is one paddle and a ball. Not so good?

There needs to be some code in order to make this work. Create a new folder named Scripts and add a new C# file named paddle.cs. Below is what the code should look like (or close).

The Update function is part of the core Unity MonoBehaviour class. It is called once per frame. The variable 'gameObject' refers to the object to which this script is attached.

Input.GetAxis("Vertical") will return -1 or 1 depending on whether the down or up arrow key is pressed. This value times the speed is used to increment the y position of the game object.

playerPosition = new Vector2(-20,Mathf.Clamp(yPosition,-13,13));

This line creates a new 2D vector with a fixed X location of -20 (where I placed the paddle on the screen), and a new value of Y based on the new yPosition. Mathf.Clamp() restricts the y value to between -13 and 13. These values were determined by experimentation.

The last line transforms the object to its new position. Since the x value is always -20, the paddle will only move up or down.
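The clamping step is worth a note on its own. In Python for illustration, Unity's Mathf.Clamp behaves like:

```python
# Equivalent of Unity's Mathf.Clamp: pin a value inside [lo, hi].
def clamp(value, lo, hi):
    return max(lo, min(hi, value))

print(clamp(20, -13, 13))   # -> 13 (paddle stopped at the top)
print(clamp(-20, -13, 13))  # -> -13 (stopped at the bottom)
print(clamp(5, -13, 13))    # -> 5 (free to move)
```

Without the clamp, holding an arrow key would let the paddle slide right off the screen.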

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class paddle : MonoBehaviour {
    public float speed = 3;
    public float yPosition;
    public Vector2 playerPosition;

    // Update is called once per frame
    void Update () {
        yPosition += Input.GetAxis("Vertical") * speed;
        playerPosition = new Vector2(-20, Mathf.Clamp(yPosition, -13, 13));
        gameObject.transform.position = playerPosition;
    }
}






Multimedia Mobile application using Low Power Bluetooth(BLE)

… in a museum. You walk by a painting and suddenly your phone becomes the voice of the artist and begins to speak to you about the piece… bringing art to the guest.

In 2008 I developed an application that used RFID to trigger events on a mobile device (PDA). The main purpose was to be an Electronic Docent, a museum guide: exhibit information delivered directly to the guest.

Unfortunately RFID never became a consumer-friendly technology. Fast forward to 2016: smart phones are prevalent and low-power Bluetooth (BLE) devices are becoming ever more popular. In January, two others and I began development on a new version of the application.

The PDA has been replaced by smart phones and tablets. Both iOS and Android hold major positions in this area, and both have support for standard Bluetooth as well as BLE.

How it works

The application running on the device is designed to look for BLE tags. When one is located, a request is made to a server to search the database for the tag id. If the id is found, information about the media is returned and the user can select whether they want to view the media or not.
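The lookup flow can be sketched with a dictionary standing in for the server's database (the tag ids and media entries here are invented):

```python
# Tag id -> media lookup, as the server would do for the mobile app.
MEDIA_BY_TAG = {
    "ble-0001": {"title": "About this painting", "type": "audio"},
    "ble-0002": {"title": "Sculpture walkthrough", "type": "video"},
}

def lookup_tag(tag_id):
    media = MEDIA_BY_TAG.get(tag_id)
    if media is None:
        return {"found": False}             # unknown tag: nothing to offer
    return {"found": True, "media": media}  # the app asks the user to view or skip

print(lookup_tag("ble-0001")["media"]["type"])  # -> audio
print(lookup_tag("ble-9999")["found"])          # -> False
```

In the real system this lookup happens behind a REST call, with the association data maintained by the venue's staff.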


The tags and media have to be associated. This is done by personnel managing the location; they understand both the content and how they would like it displayed to the visitor.




One of the biggest decisions was how to develop the mobile portion of the application.

  1. Native: iOS and Android
  2. Cross-platform framework: Xamarin, Qt
  3. JavaScript/HTML5 framework: Apache Cordova (formerly PhoneGap), Ionic


Until a few years ago mobile applications had to be developed in Java or Objective-C; Apple refused to accept applications cross-compiled or interpreted into Objective-C. The drawback is that an application had to be developed twice, and maintenance was much harder since it required twice the effort in coding and QA.

On the other hand, native applications had full access to the device's hardware: sound, touch, GPS, and the accelerometer.

Cross-platform frameworks

Frameworks such as Xamarin and Qt allow the developer to write one application and deploy it to multiple mobile platforms.

Xamarin: Based around C# and created by the team behind Mono, Xamarin compiles C# code into native code for iOS or Android. Microsoft now owns Xamarin and has integrated it into its Visual Studio IDE.

Qt: This has long been a popular framework for developing applications for Windows, OS X, and Linux. When mobile support was added, licensing issues arose, and Qt apps have less of a native look and feel.

JavaScript/HTML5 framework: Tools such as Ionic use the Angular.js framework and Cordova libraries to create cross-platform applications. The key to their success has been the Cordova (PhoneGap) libraries, which provide access to the device hardware and let the application behave more like native code.

We chose Ionic. There were too many issues with either Xamarin or Qt, and developing two separate native applications was out of the question.

Serving up data

Once the mobile application finds a BLE tag it needs to get the information associated with it. That means an application server. This was a simple choice: Java, Hibernate, MySQL, and Tomcat. The combination is proven, solid, and works well on something like AWS. One advantage of MySQL is that AWS's Aurora database is MySQL-compatible and an easy replacement if very high performance is required.

Server side

Using Java and Hibernate makes the server work pretty straightforward. The code is built in layers: entities, DAOs, services, and controllers.


Each entity represents a table in the database.

  • ExhibitTag
    • This represents a single BLE tag.
  • ExhibitTagMedia
    • This represents the media associated with a tag. A tag can have more than one media component.
  • Location
    • This represents the location of the tag.
  • Organization
    • This represents the site or organization managing the tags.
package com.tundra.entity;
import java.io.Serializable;
import java.util.Date;
import java.util.Set;
import javax.persistence.Basic;
import javax.persistence.CascadeType;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;
import javax.persistence.Table;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;
import com.fasterxml.jackson.annotation.JsonIgnore;

@Entity
@Table(name = "exibittag")
public class ExhibitTag implements Serializable {

 private static final long serialVersionUID = 1L;

 @Id
 @Basic(optional = false)
 @Column(name = "Id")
 private Integer id;
 @Basic(optional = false)
 @Column(name = "Name")
 private String name;
 @Basic(optional = false)
 @Column(name = "Tag")
 private String tag;
 @Basic(optional = false)
 @Column(name = "Description")
 private String description;
 @Basic(optional = false)
 @Column(name = "Created")
 @Temporal(TemporalType.TIMESTAMP)
 private Date created;
 @Basic(optional = false)
 @Column(name = "Updated")
 @Temporal(TemporalType.TIMESTAMP)
 private Date updated;
 @JoinColumn(name = "Location_Id", referencedColumnName = "Id")
 @ManyToOne(optional = false, fetch = FetchType.EAGER)
 private Location location;
 @OneToMany(cascade = CascadeType.ALL, mappedBy = "exhibitTag", fetch = FetchType.EAGER)
 private Set<ExhibitTagMedia> exhibitTagMediaSet;

// setters and getters removed

 @Override
 public int hashCode() {
  int hash = 0;
  hash += (id != null ? id.hashCode() : 0);
  return hash;
 }

 @Override
 public boolean equals(Object object) {
  if (!(object instanceof ExhibitTag)) {
   return false;
  }
  ExhibitTag other = (ExhibitTag) object;
  if (this.id == null && other.id == null) {
   return super.equals(other);
  }
  if ((this.id == null && other.id != null) || (this.id != null && !this.id.equals(other.id))) {
   return false;
  }
  return true;
 }

 @Override
 public String toString() {
  return "Exibittag[ id=" + id + " ]";
 }
}


Spring Data will create the query for findByTag() automatically, deriving it from the method name.

package com.tundra.dao;

import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.transaction.annotation.Transactional;
import com.tundra.entity.ExhibitTag;

@Transactional
public interface ExhibitTagDAO extends JpaRepository<ExhibitTag, Integer> {
 List<ExhibitTag> findByTag(String tag);
}


The service layer is how the controller will interface with the server.

package com.tundra.service;

import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import com.tundra.dao.ExhibitTagDAO;
import com.tundra.dao.ExhibitTagMediaDAO;
import com.tundra.dao.OrganizationDAO;
import com.tundra.entity.ExhibitTag;
import com.tundra.entity.ExhibitTagMedia;
import com.tundra.entity.Organization;
import com.tundra.response.ExhibitTagSummaryResponse;

@Service
public class TundraServiceImpl implements TundraService {

 @Autowired
 private ExhibitTagDAO exhibitTagDAO;
 @Autowired
 private OrganizationDAO organizationDAO;
 @Autowired
 private ExhibitTagMediaDAO exhibitTagMediaDAO;

 @Override
 public List<Organization> findAllOrganizations() {
  return organizationDAO.findAll();
 }

 @Override
 public Organization findOrganization(int id) {
  return organizationDAO.findOne(id);
 }

 @Override
 public List<Organization> findByName(String name) {
  return organizationDAO.findByName(name);
 }

 @Override
 public List<Organization> findByNameAndCity(String name, String city) {
  return organizationDAO.findByNameAndCity(name, city);
 }

 @Override
 public ExhibitTag findByTag(String tag) {
  ExhibitTag et = null;
  List<ExhibitTag> list = exhibitTagDAO.findByTag(tag);
  if (list != null && list.size() == 1) {
   et = list.get(0);
  }
  return et;
 }

 @Override
 public List<ExhibitTag> findAllTags() {
  return exhibitTagDAO.findAll();
 }

 @Override
 public ExhibitTagMedia findMediaByTag(String tag) {
  ExhibitTagMedia media = null;
  List<ExhibitTagMedia> list = exhibitTagMediaDAO.findByExhibitTag(tag);
  if (list != null && list.size() == 1) {
   media = list.get(0);
  }
  return media;
 }

 @Override
 public ExhibitTagSummaryResponse findSummaryByExhibitTag(String tag) {
  ExhibitTagSummaryResponse summary = null;
  List<ExhibitTagSummaryResponse> list = exhibitTagMediaDAO.findSummaryByExhibitTag(tag);
  if (list != null && list.size() == 1) {
   summary = list.get(0);
  }
  return summary;
 }
}


The controller layer represents the REST layer; the mobile app interfaces with the server via the controller.

package com.tundra.controller;

import java.io.Serializable;
import java.util.List;
import javax.servlet.http.HttpServletResponse;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;
import com.tundra.entity.ExhibitTag;
import com.tundra.entity.ExhibitTagMedia;
import com.tundra.response.ExhibitTagSummaryResponse;
import com.tundra.service.TundraService;

@Controller
public class ExhibitController implements Serializable {

 private static final String ERROR_PREFIX = "Whoops : ";
 private static final long serialVersionUID = 1L;

 @Autowired
 private TundraService tundraService;

 @RequestMapping(value = "/{tag}", method = RequestMethod.GET)
 public @ResponseBody ResponseEntity<?> getExhibitTagByTagId(HttpServletResponse httpResponse, @PathVariable(value = "tag") String tag) {
  try {
   return new ResponseEntity<ExhibitTagSummaryResponse>(tundraService.findSummaryByExhibitTag(tag), HttpStatus.OK);
  } catch (Throwable t) {
   return new ResponseEntity<String>(ERROR_PREFIX + t.getMessage(), HttpStatus.INTERNAL_SERVER_ERROR);
  }
 }

 @RequestMapping(value = "/media/{tag}", method = RequestMethod.GET)
 public @ResponseBody ResponseEntity<?> getExhibitMediaByTagId(HttpServletResponse httpResponse, @PathVariable(value = "tag") String tag) {
  try {
   return new ResponseEntity<ExhibitTagMedia>(tundraService.findMediaByTag(tag), HttpStatus.OK);
  } catch (Throwable t) {
   return new ResponseEntity<String>(ERROR_PREFIX + t.getMessage(), HttpStatus.INTERNAL_SERVER_ERROR);
  }
 }

 @RequestMapping(value = "/list", method = RequestMethod.GET)
 public @ResponseBody ResponseEntity<?> getExhibits(HttpServletResponse httpResponse) {
  try {
   return new ResponseEntity<List<ExhibitTag>>(tundraService.findAllTags(), HttpStatus.OK);
  } catch (Throwable t) {
   return new ResponseEntity<String>(ERROR_PREFIX + t.getMessage(), HttpStatus.INTERNAL_SERVER_ERROR);
  }
 }
}
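From a client's point of view, the three mappings above translate into simple GET URLs. The sketch below builds them in Python; the host, port, and helper names are my assumptions, not part of the server code.

```python
import json
import urllib.request

BASE = "http://localhost:8080"  # assumed deployment URL

def endpoint_for(kind, tag=None):
    """Build the URL for each controller mapping."""
    if kind == "summary":
        return "%s/%s" % (BASE, tag)        # GET /{tag}
    if kind == "media":
        return "%s/media/%s" % (BASE, tag)  # GET /media/{tag}
    return "%s/list" % BASE                 # GET /list

def fetch(url):
    """Issue the request and decode the JSON body."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))
```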


With the server code in place, it's time to look at the mobile app.

As stated earlier, we are using the Ionic framework, which is based on JavaScript/Angular.

The structure of an Ionic project is shown below.


The areas we change are app.js, controller.js, and service.js. index.html is modified only slightly to include our files.

<!-- cordova script (this will be a 404 during development) -->
<script src="cordova.js"></script>

<!-- your app's js -->
<script src="js/app.js"></script>
<script src="js/controller.js"></script>
<script src="js/service.js"></script>

The templates folder holds the HTML files for the various screens. Since we started with a tabbed Ionic project we have two core HTML templates, tab.html and tab-dash.html. The tab format uses tabbed pages as navigation. We are not using this format, and the files will be renamed later on.


<ion-tab title="My Docent" icon-off="ion-ios-pulse" icon-on="ion-ios-pulse-strong" href="#/tab/dash">
  <ion-nav-view name="tab-dash"></ion-nav-view>
</ion-tab>

The main screen is in tab-dash.html

<ion-header-bar align-title="center" class="bar-stable">
  <h1 class="title">Available Exhibits</h1>
</ion-header-bar>
<ion-content>
</ion-content>

The screen is very basic.


The other screens represent the media types: text.html, video.html, and audio.html. Here is an example of a text view.


The app.js file is loaded first and sets up the basic structure. The application uses the Bluetooth Low Energy (BLE) Central plugin for Apache Cordova. If the app is running on a real mobile device (not in a browser on a PC) the object 'ble' will be defined; on a PC it will not be. The app.js run function checks for this.

 if (typeof(ble) != "undefined") {
    // ble.isEnabled takes a success and a failure callback
    ble.isEnabled(
      function () {
        document.getElementById("bleStatus").style = "color:green;";
      },
      function () {
        document.getElementById("bleStatus").style = "color:red;";
      });
 }


The controller layer handles events from the HTML (UI) code.

Example: in the main HTML file there is a button to start scanning.
<button ng-click="startScanning()" class="button">Search</button>

In the controller there is the startScanning function. The BLEService is located in the service layer.

$scope.startScanning = function () {
    BLEService.connect(function (exibitTags) {
        $scope.exibitTags = exibitTags;
        $scope.myText = "startScanning";
        isScanning = true;
    });
};

In the service layer.

.service("BLEService", function ($http) {
  function onError(error) {
    console.log(error);
  }
  this.connect = function (onConnect) {
    // only scan if the BLE plugin is available (i.e. on a real device)
    if (typeof(ble) != "undefined") {
      // scan for any device for up to 30 seconds
      ble.scan([], 30, onConnect, onError);
    }
  };
})

The onConnect function returns a list of the Bluetooth tags that were located.

Once the list of devices is returned, the REST service is called to check the tags against the database. The server returns:

  • Organization Name
  • Location Name
  • Exhibit TagName
  • Exhibit TagId
  • Exhibit Tag
  • Exhibit Tag MimeType

The user selects which exhibit they want to view.

Testing the app locally

Ionic can run the app locally by using the command ‘ionic serve’ from the project folder.

C:\Users\rickerg0\workspace\Tundra>ionic serve
The port 35729 was taken on the host localhost - using port 35730 instead
Running live reload server: http://localhost:35730
Watching: www/**/*, !www/lib/**/*, !www/**/*.map
√ Running dev server: http://localhost:8100
Ionic server commands, enter:
 restart or r to restart the client app from the root
 goto or g and a url to have the app navigate to the given url
 consolelogs or c to enable/disable console log output
 serverlogs or s to enable/disable server log output
 quit or q to shutdown the server and exit

The basic screen as viewed in Firefox.


Deploy the app to an Android device from Windows

Make sure the device is connected via the USB port, and set the developer options on the device; if you skip this last step the device will not allow Ionic to connect. From the terminal issue the command 'ionic run android'. This will build the APK file and install it on the device.




Podcast Interview with Greg Ricker

My podcast with Rik Van Bruggen


Building BB8

Soon after the latest Star Wars movie came out, Sphero introduced its model of the BB-8 robot.


Soon afterward people were taking it apart to see how it worked.


Two designs emerged, the hamster cage


and single axis


A few people started posting DIY projects trying to build a “working” BB-8 robot.

I decided to try my hand at building a "working" BB-8 as well, starting in January with the goal of being ready for PortCon (Portland, ME) in June.

The Sphere

There are three primary methods for constructing the sphere.

  1. Purchase a pre-made plastic sphere (two halves).
    1. This can be expensive, and there is also the issue of assembling the sphere.
  2. 3D print various panels and then assemble them into a sphere.
    1. Reading what others have said about this process, it's not simple. The size and complexity of the panels make this a difficult process. Besides being expensive, it's hard on the printer; a number of people report having to repair or replace their printers.
  3. Construct a sphere from a material such as fiberglass.
    1. This started out as the most common method. An early DIY project made it seem much simpler than it really is. It involves covering a ball (beach or yoga) with a paper/canvas mache mixture. The BB-8 community decided that the body is about 20 cm in diameter, and the ball in the DIY project is not that big. As it turns out, finding a beach ball in Maine in January is impossible, so it was off to Amazon.


All three balls are listed as 20 cm. Hmmmm…

First attempt with paper and canvas, following the DIY project.


Clearly this was not going to work. I decided to use fiberglass instead of canvas. I also found a 20 cm ball at a party store.


The Head


The Drive Train (part 1)


June: PortCon, Portland, Maine

Despite the drive train issues, BB-8 still spoke and the light worked. So it was off to the con.



It's back to the drawing board with the drive system…

The new drive mechanism.

I started over with new motors, frame, and servos. So far it's looking a lot better.



Two videos before this all gets put back in the ball for a test run..




Graph of a musical group's albums, songs and lyrics

The Idea

Being the dad of a teenage daughter means I listen to a lot of current music: Lady Gaga, Taylor Swift, and recently it's all about One Direction. As one site recently put it, "One Direction owns the internet in 2015." Sometimes I hear "this is a sad song" or "this is a happy one." What could I learn about their music using Neo4j? Could one derive any sort of sentiment from the lyrics? Could I get my daughter interested in this? Only one way to find out…

How to start

The first step was to learn more about the group. There are currently four members, but for most of their albums there were five: Harry Styles, Niall, Liam, Zayn, and Louis. They have released five albums: Four, Take Me Home, Up All Night, Midnight Memories, and Made in the A.M. With the help of my daughter I found a site that had the lyrics to all of the songs. While some of the song files contained information about who was singing which section, many did not. I was hoping the sentiment analysis could be aided by knowing the singer; maybe Harry always sings sad/break-up songs (he did date Taylor Swift). Since this information isn't consistent, I couldn't count on it.

Song sentiment?

I felt it was important to be able to track lyrics by their location in the song, row and column. This way one could ask: what words appear most often at the start (0,0) of a song? How often do certain word combinations ("I" and "you") appear on the same line? This last question could be useful in better understanding sentiment.


Tools: Python, py2neo, R and RNeo4j.

The Model

The first step was to organize the songs into files by album. Once this was done it was simple to have Python read in a list of albums, song titles, and lyrics (words). The graph…
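The parsing step might look something like this in Python (a sketch; `parse_lyrics` and the sample lines are mine, not from the actual scripts):

```python
def parse_lyrics(song_text):
    """Record every word with its (row, column) position in the song."""
    positions = []
    for row, line in enumerate(song_text.splitlines()):
        for col, word in enumerate(line.split()):
            positions.append((row, col, word))
    return positions

# a made-up two-line song fragment
sample = parse_lyrics("you and I\nup all night")
# sample[0] is (0, 0, 'you'); sample[3] is (1, 0, 'up')
```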

I decided that a Group node would refer to a band or singer. A group is made up of members, and members are artists. For bands this is fine; I chose to treat solo acts the same way, so Lady Gaga or Taylor Swift would be considered a group, a member, and an artist.


  • Group
  • Member
  • Artist
  • Album
  • Song
  • Lyrics


  • Album BY Group
  • Lyric IN Song
  • Song ON Album
  • Member ISA_ARTIST Artist
  • Group HAS_MEMBER Member


For the gist I restricted the data to one song per album and reduced the lyrics by two thirds. Even so, there are still 581 lyric nodes and 232 unique words. The difference is due to words being repeated in different locations; the word "you" is found 28 times in the five songs.

Query 1

0 rows
5641 ms
| No data returned. |
Nodes created: 602
Relationships created: 609
Properties set: 1774

Find all songs where the word “my” appears

Query 2

MATCH (l:Lyric{name:"my"})-[r0:IN]-(s:Song) RETURN s.name, l.row, l.column

Show distinct lyrics in the song “If I Could Fly”

Showing 1 to 10 of 56 entries

Query 4

MATCH (l:Lyric)-[r0:IN]- (n:Song) WHERE =~ "(?i)said" RETURN n,l

Show all lyrics in Act My Age.

Show all artists and members for the group

Show all songs on all of the albums. For the gist there is only one song per album.

Show all albums and members for the group

Show all of the lyrics for the song "Kiss You". There are some connections of lyrics to other songs; this is because those lyrics are used in the same location. The lyric "Baby" is used in "Kiss Me" and "What Makes You Beautiful" in the same row and column.

A query to find songs where the words "I" and "you" are on the same line. The query works well from Python since I can filter out return values of 0. This type of search will be helpful when looking for phrases, i.e. words on the same line.

Query 5
MATCH (l1:Lyric{name: 'I'}) --(s:Song)
MATCH (l2:Lyric{name :'you'}) --(s:Song)
RETURN CASE WHEN l1.row = l2.row THEN [l1,l2,s] ELSE 0 END
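The Python-side filtering mentioned above is just a matter of dropping the 0 rows (a sketch; `results` stands in for the values the query returns):

```python
def same_line_hits(results):
    """Keep only rows where the CASE expression matched (non-zero)."""
    return [row for row in results if row != 0]

# two non-matches and one match, shaped like the query's return values
hits = same_line_hits([0, ["I", "you", "Act My Age"], 0])
```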


Song Act My Age










Actual line, row 3: "I can count on you after all that we've been through"

If I Could Fly










Actual line, row 5: "I hope that you listen 'cause I let my guard down"

Sentiment and R

Below is a bar chart of the top ten most common lyrics. “I” and “you” are popular.

The last thing to consider is sentiment. Using the simple process of counting positive and negative words, I'd like to see if one can make a determination of sentiment. There isn't a song-specific word list that I could find, so I elected to use the AFINN list. Following examples from Jeffrey Breen and Andy Bromberg I was able to get some results. I didn't divide the songs into training and test sets; instead I picked two songs and processed them. My daughter suggested that "Best Song Ever" would be happy and "If I Could Fly" would be sad.

The process starts with a query:

graph = startGraph("http://localhost:7474/db/data/")
query = "MATCH (l:Lyric)-[r0:IN]-(n:Song {name:'best song ever'}) RETURN l.name"

ta = cypher(graph, query)

This returned a list of lyrics. Next I counted the number of lyrics that matched a positive or negative word in the AFINN list. I classified the words into "reg" (scale 1-3) and "very" (scale 4-5) for both positive and negative.
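The counting step can be sketched in Python as well. The tiny `AFINN_SAMPLE` dict below is a stand-in for the real AFINN list, which scores words from -5 to 5:

```python
# stand-in for the AFINN word list (word -> score)
AFINN_SAMPLE = {"love": 3, "happy": 3, "amazing": 4,
                "sad": -2, "hate": -3, "horrible": -5}

def classify(lyrics, afinn=AFINN_SAMPLE):
    """Bucket matched words into reg (|score| 1-3) and very (|score| 4-5)."""
    counts = {"pos_reg": 0, "pos_very": 0, "neg_reg": 0, "neg_very": 0}
    for word in lyrics:
        score = afinn.get(word.lower(), 0)
        if score > 0:
            counts["pos_very" if score >= 4 else "pos_reg"] += 1
        elif score < 0:
            counts["neg_very" if score <= -4 else "neg_reg"] += 1
    return counts
```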

Using the R functions naiveBayes() and predict(), the method is very simple, but the results do suggest that "Best Song Ever" is "happier" than "If I Could Fly". It would be good to get One Direction's opinion on this.

"Best Song Ever"
           reg   very
positive    10      3
negative     3      0

"If I Could Fly"
           reg   very
positive     1      0
negative     4      0

One thing I noticed is that simple word matching isn't sufficient. For movie reviews or emails this may work; songs are more complex.

For example, a happy song might have the line "I love you" while a sad song might have the line "I used to love you". Both contain the positive word "love", but the second line could be read as sad: love lost. This is where querying lyrics on the same line could help. It's more complex than matching positive and negative words.
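One cheap way to use line context, sketched below: flip a positive word's contribution when a negating phrase appears on the same line. The negator list and `line_polarity` function are hypothetical, just to illustrate the idea:

```python
# phrases that invert an otherwise positive line (illustrative only)
NEGATORS = ("used to", "no longer", "never")

def line_polarity(line, positives=("love", "happy")):
    """+1 for a positive word, -1 if it is negated on the same line, else 0."""
    words = line.lower().split()
    if not any(p in words for p in positives):
        return 0
    negated = any(neg in line.lower() for neg in NEGATORS)
    return -1 if negated else 1
```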

Conclusion

This was fun, and I got a little father-daughter time in as well. I'd like to pursue this to see what can be done by considering phrases and connected words.

Next up: Lady Gaga


Like a lot of people I grew up with video games, but they were quite different from what we have today. Space Invaders, Lunar Lander, Missile Command, and Asteroids look like cave drawings compared to what is available now. I have experimented with tools like LightWave and Maya, but their cost is prohibitive and they are not really suited for amateur game developers. Unity 3D, on the other hand, is ideally suited for those just getting started with game development, while also supporting more complex professional games. Unity's recent announcement of free support for mobile applications means it's time for me to make the leap.

A modern game typically requires a lot of people, mainly artists, to create scenes and characters. I can use tools such as Blender, but I am not nearly proficient enough to build the art as well as create the game. I need a game where I can leverage existing artwork and just focus on the mechanics of the game and learning Unity.

What I need is a 2D side-scrolling space game. I decided to try to replicate the Lunar Lander game.


It won't be an exact match; instead it will be more updated and something that fits the Unity model. Look around the Apple and Google app stores and you can find a number of these games: some are 2D while others are 3D and much more realistic. I am not trying to be the next "Flappy Bird", so I don't expect to compete with other games. It's all about the learning.

Unity 3D

A lot can be done with Unity right out of the box, but anything that requires reacting to the player is going to need custom coding. There are two choices for this, C# and JavaScript. A lot of the tutorials and examples are in JavaScript, so I'll stick with that.

The Game

The point of the game is to land the ship on the surface before you run out of fuel and crash. In the earlier games the ship would rotate as well as translate; correcting the rotation makes the game much more difficult to play, so for this version I'll stick with simple translation: left, right, up, and down. Of course there needs to be a surface to land on, and a simple flat surface is boring, so adding some obstacles will make it a bit more challenging.

Things to consider:

  • The ship
  • Obstacles
  • Landing
  • Movement
  • Gravity
  • Fuel
  • Crashing
  • Player controls
  • Scoring
  • Sound

The Ship

Unity can import models from many tools such as Blender and 3ds Max. For a mobile game the model cannot be too complex: the more detailed the model, the poorer the game's performance. I found a reasonably sized lunar lander model from NASA that is free to use.



In the original game the surface alternated between flat terrain and mountains. I decided to add rocks to a flat surface, and to make things a bit more complex the rocks appear at random locations and sizes.



The rocks provide obstacles to avoid, but there needs to be a 'safe' landing place. These are marked green so they can be seen by the player. Since the rocks are randomly placed, the landing places need to be adjusted as well. The process is to place a landing spot and then place the rocks; the code has to make sure the rocks are not covering the landing place and that there is enough room for the lander.
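The placement rule can be sketched independently of Unity (in Python; the names, pad clearance, and coordinate range here are illustrative, not the game's actual values):

```python
import random

def overlaps(x, pad_xs, clearance=800):
    """True if x falls within the clearance zone of any landing pad."""
    return any(px - clearance <= x <= px + clearance for px in pad_xs)

def place_rocks(count, pad_xs, max_tries=10):
    """Pads are placed first; any rock that lands on a pad is re-rolled."""
    rocks = []
    for _ in range(count):
        for _ in range(max_tries):  # give up after max_tries attempts
            x = random.uniform(-50000, 50000)
            if not overlaps(x, pad_xs):
                rocks.append(x)
                break
    return rocks
```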

Startup code to build the scene:

Declare the rocks and landing pads

var rocks: Transform[];
var landingPads: Transform[];

Find the game object tagged GUI so that we can determine the player's level. The landing pads are placed differently once the player is beyond level one.

Create the landing pads by varying the “x” value.

GUI = GameObject.FindGameObjectWithTag("GUI").GetComponent(InGameGUI);
if (GUI.playerLevel > 1)
    startx = (GUI.playerLevel * 1.1) * 4895.0;
else
    startx = 4895.0;
currentXoffset = startx + 1200 * Random.Range(3, 10);
for (i = 1; i < numberOfLandingPads; i++) {
    lp = Instantiate(landingPads[0], Vector3(currentXoffset, -69.0, 514.6719), Quaternion.identity);
    lp.transform.localScale.x = 160;
    lp.transform.localScale.y = 1.1;
    lp.transform.localScale.z = 160;
    // remember where each pad is so the rocks can avoid it
    lp_locations[lp_locations_index, 0] = currentXoffset;
    lp_locations[lp_locations_index, 1] = (lp.transform.localScale.x * 5);
    lp_locations_index++;
    // space the pads a random number of pad widths apart
    currentXoffset += (lp.GetComponent.<Renderer>().bounds.size.x * Random.Range(3, 6));
}

Create 1000 rocks. Each rock is generated at a random x location, and the height of each rock (y direction) is also random. The game is 2D but I am using Unity in 3D mode, so for the rocks I am creating a 3D field; at some point I may change the game to be more 3D. Each rock is checked to make sure that it doesn't overlap with a landing pad. I didn't want the code to get stuck in the overlap process, so after 10 tries I give up.

for (var x = 0; x < 1000; x++) {
    var breakOut = 0;
    do {
        var index = Random.Range(0, 4);
        var locX = Random.Range(-50000, 50000);
        var locZ = Random.Range(-3000, 2000);
        var scaleX = 200; // Random.Range(Random.Range(5,50), Random.Range(150,200));
        var scaleY = Random.Range(Random.Range(5, 50), Random.Range(70, 500));
        if (GUI.playerLevel > 2)
            scaleY = Random.Range(Random.Range(5, 50), Random.Range(70, GUI.playerLevel * 500));
        var scaleZ = 400; // Random.Range(Random.Range(5,50), Random.Range(50,100));
        breakOut++;
        // give up after 10 attempts to find a non-overlapping spot
        if (breakOut > 10)
            break;
    } while (checkOverlap(locX));
    rock = Instantiate(rocks[index], Vector3(locX, 0, locZ), Quaternion.identity);
    rock.transform.localScale.x = scaleX;
    rock.transform.localScale.y = scaleY;
    rock.transform.localScale.z = scaleZ;
    rock.tag = "rock";
}

A lot of values are hardcoded simply for expedience. Good software practice would be to use variables or constants.


Since the game has more than one or two controls it requires buttons. Keyboard controls are not an option and multi-touch is complicated. I need to control the main engine (up), the left and right thrusters, and a pause button.

A ParticleEmitter is used to indicate engine or thruster action.

var engineThruster : ParticleEmitter;
var LeftThruster : ParticleEmitter;
var RightThruster : ParticleEmitter;

An audio file is played when the engine is on. While the engine button is pressed, the emitter is set to true.

// if the emitter is not running then fire it,
// play the sound, and move the ship up
 if (engineThruster.emit == false) {
    engineThruster.emit = true;
    audio.Play();
 }

function moveShip_up() {
    var dir : Vector3;
    // if we are out of fuel then do not move the ship
    if (fuelMeterCurentValue == 0)
        return;
    // update the fuel status
    fuelMeterCurentValue -= fuelLossRate * Time.deltaTime;
    // get the local pos
    pos = Camera.main.WorldToScreenPoint(transform.position);
    // if the ship is higher than the screen, set the velocity to 0
    if (pos.y >= Screen.height)
        rigidbody.velocity = Vector3.zero;
    // yMovement is either 1 or 0 depending on the button pushed;
    // it limits movement to X-only or Y-only.
    // adjust the upward thrust the further away from the ground;
    // the value '200' should be replaced with a ratio of the screen height
    if ((pos.y < ceiling) && (pos.y > Screen.height - 200))
        dir = Vector3(0, yMovement * upwardThrust / 2.0, 0);
    else if ((pos.y < Screen.height - 200) && (pos.y > Screen.height / 2))
        dir = Vector3(0, yMovement * upwardThrust / 1.5, 0);
    else
        dir = Vector3(0, yMovement * upwardThrust, 0);
    // add force to the ship
    rigidbody.AddForce(dir);
}



The assumption is that the planet has gravity. I have left the gravity at Unity's standard setting.


Fuel usage is adjusted whenever the engine is running. In Unity's FixedUpdate() function the fuel is adjusted:

 fuelMeterCurentValue -=fuelLossRate*Time.deltaTime;

Multiplying by Time.deltaTime scales the fuel usage to the FixedUpdate() interval; it is standard in Unity to do this when doing work in the fixed update call.
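The same bookkeeping in plain Python shows why this works: multiplying by the timestep makes total fuel loss depend on elapsed time, not on how many updates ran (the values here are illustrative, not the game's):

```python
def drain(fuel, loss_rate, dt, steps):
    """Deplete fuel at loss_rate units per second over `steps` fixed updates."""
    for _ in range(steps):
        fuel -= loss_rate * dt
    return fuel

# 50 fixed updates of 0.02 s each = 1 simulated second,
# so a rate of 10 units/s removes about 10 units of fuel
remaining = drain(100.0, 10.0, 0.02, 50)
```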


There are two ways to fail a landing: land on rocks, or land too fast. A vertical velocity indicator turns red when the ship is descending too fast. When the ship touches the landing pad the velocity is checked. The function OnCollisionEnter() is called when two objects touch; in this case it will be the ship and either a landing pad or a rock. Setting Time.timeScale to zero stops the game play, and GUI.guiMode is set to either win or lose, which causes the correct screen to be displayed and the score to be adjusted.

function OnCollisionEnter(theCollision : Collision) {
    if (theCollision.gameObject.tag == "landingpad") {
        if (theCollision.relativeVelocity.magnitude > 50.0) {
            // hit the pad too hard
            GUI.guiMode = "Lose";
            Time.timeScale = 0;
        } else {
            Time.timeScale = 0;
            GUI.guiMode = "Win";
        }
    }
}


Since this is a mobile game there need to be buttons for the player. A single touch would work if all it did was run the lander engine, but left and right translations are harder. A touch to the left of the lander could move it left, and the same for right, but since the lander moves it could slide under the touch point and cause the movement to change. Buttons just seem easier.

Unfortunately Unity's UI is not straightforward. The placement and operation of a button is pretty simple: buttons are GUITexture components. Getting the position and sizing correct for different-sized devices is a challenge. There is talk that future versions of Unity will have better UI tools.

In the FixedUpdate function I test each button.

for (var touch : Touch in Input.touches) {
    if (engineButton.HitTest(touch.position)) {
        // handle engine event
    }
    if (leftThrusterButton.HitTest(touch.position)) {
        // handle left thruster event
    }
}


Scoring is pretty straightforward: land successfully and you get a point and proceed to the next level; crash and you have to repeat the level. At each level the landing spots get harder to find, so as the level increases I need to increase the fuel (or lower the rate at which it is used).


Sound is handled from an AudioSource component.


This plays the sound once; as long as the button is held down the sound plays over and over. Playing the sound in a loop is possible for something like background music, but for sounds like the engine or thrusters I need short bursts of sound.

Screen Shots

The ship approaching a landing pad. The vertical velocity is white and positive, indicating that the ship is moving up at a rate within the range for landing.


Since the landing pads are randomly placed I found it hard to locate them without running out of fuel, so I added an overhead view in the upper right corner to guide the player towards a landing pad.


The left corner shows the fuel and velocity levels.


The ship over the rocks. The vertical velocity is red and negative, indicating that the ship is descending at a rate too great to land.


Google Play

I decided to put the game on Google Play just to see how the process works.

Update: I see one person has complained that at a high level you just crash into the rocks. It could be a fuel issue: the landing pads are too far away for the fuel usage rate.

Once I get the iOS version to work I’ll put it on the Apple Store as well.
