Resolve Docker time drift between container and host machine

I’ve had a number of projects now that use Let’s Encrypt and AWS services which are time-sensitive when generating things like SSL certificates or signed CloudFront one-time access URLs.

However, over time a drift develops between the Docker container’s clock and the host machine’s, which results in expired SSL certificates, or in signed URLs resolving with Access Denied or Authorization Failed errors.

In order to sync your host time to your Docker container you can simply map /etc/timezone and /etc/localtime as volumes within the container:

    - /etc/timezone:/etc/timezone
    - /etc/localtime:/etc/localtime
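For docker-compose users, the same two mappings go under your service’s volumes key. A minimal sketch — the service name app and the image are placeholders, and the :ro (read-only) flags are optional hardening since the container only needs to read these files:

```yaml
version: "2"
services:
  app:
    image: your/image
    volumes:
      # share the host's timezone and local time with the container
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
```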

Building an Amazon Dash Button doorbell that sends you a push notification and snapshot

Over the last few days I’ve been working away on making an Amazon Dash Button doorbell that sends you a notification through IFTTT with a snapshot from a MotionEyeOS camera that I’ve got set up on my front door. The image snapped from the door cam is uploaded to an Amazon AWS S3 bucket and sent in the IFTTT notification to my phone. Then I can take a sneaky peek at who has pressed my doorbell 🙂

IFTTT Notification (including the S3 link to the MotionEyeOS image)

Full tutorial to follow; meanwhile, visit the repo here:

*bonus* – It even plays The Next Generation doorbell sound when pressed!

Globalsat BU353 S4 GPS Receiver showing garbled messages

I’m working on a project using a Globalsat BU353S4 GPS receiver, but I’ve had garbled messages being returned when reading the data through the serial connection. What I’ve realised is that somehow the device has switched from NMEA to SiRF. I suspect this happens if you’ve been fiddling with the settings using Python 🙂

You need to swap your device from SiRF mode back to NMEA using the tool below.

Globalsat BU353S4

If you test it with GPSInfo and you see garbage characters coming out, then the device has possibly switched to SiRF mode. To switch it back to NMEA mode, refer to the instructions below:

1. Download and install this software:
2. Run the Sirf Demo software.
3. Connect your device, choose the correct COM port, and select the 4800 baud rate.
4. Go to Action > Open Data Source.
5. Next, go to Action > Synchronize Protocol & baud rate.
6. Then, go to Action > Switch to NMEA Protocol.
7. In the pop-up window, select the 4800 baud rate under the Baud Rate and click Send.
8. Close the Sirf Demo software.
Note: Be sure not to click anything else in this software, as you can render your device unusable.

Now, try your GPS receiver with the GPSInfo utility to see if the problem persists.

I came across the solution here –

AutoTDM – Automatically detect plugging in the thunderbolt cable and starting Target Display Mode

Each morning I’ve been hooking my MacBook Pro up to my iMac and having to start Target Display Mode (TDM). Every time I disconnected and reconnected the Thunderbolt cable I’d have to re-establish Target Display Mode, and the same happened whenever my MBP went to sleep. So, today I created a simple script that observes the network interfaces, detects when a Thunderbolt bridge is established, and then fires off the CMD+F2 Target Display Mode shortcut on my iMac.

Check out the repo here –

I’ve also got a startup script in the project folder if you want to add it to your startup applications.

Android – Automating Testing and Store Listing Screenshots with Monkeyrunner

In this tutorial we’ll be talking about Monkeyrunner and automating tasks such as preparing screenshots for Google Play Store listings.

Monkeyrunner is a tool with a high-level scripting API that is capable of automating tasks such as:

  • Installing/uninstalling apps
  • Taking screenshots and saving them to your local machine
  • Sending touch events i.e. performing gestures, pressing buttons, dragging and scrolling
  • Inputting text

There are three classes in the Monkeyrunner framework:

  1. MonkeyRunner – Connect to devices or emulators
  2. MonkeyDevice – Represents the device you are connected to. Allows you to perform touch events, install APKs, and start activities
  3. MonkeyImage – Represents the raw image captured from the device and allows you to compare images for testing

In order to run a Monkeyrunner script you need to use the monkeyrunner tool available in the android-sdk/tools directory, rather than running your Python code with python itself.


In here you’ll have an executable named monkeyrunner which you’ll need to use to run your Python scripts.

Go ahead and create a new Python file; place it anywhere you’d like. Let’s start by connecting to the device, installing your APK, taking a screenshot and saving the screenshot to our current directory.

You’ll need to tell MonkeyRunner where your APK file is so that it can install the APK. You need to run your app on a device at least once for the IDE to generate an APK file!

You can find your APK file in the following locations:
Eclipse
– projectFolder/yourNamespace/bin/your.apk
Android Studio
– projectFolder/yourNamespace/build/outputs/apk/your.apk

Create the .py file – it should look like the following:

from import MonkeyRunner, MonkeyDevice
# Connects to the first device available through the adb tool
device = MonkeyRunner.waitForConnection()
# install the APK (here 'your.apk' sits in the directory you run monkeyrunner from)
device.installPackage('your.apk')
# declare your package name
package = 'com.your.application'
# declare your activity that you want to start
activity = 'com.your.application.MainActivity'
# prepare the whole package + activity string
activityToRun = package + '/' + activity
# start the activity above
device.startActivity(component=activityToRun)
# give the activity a moment to draw before snapping it
MonkeyRunner.sleep(2)
# take a screenshot
screenshot = device.takeSnapshot()
# Writes the screenshot to a file
screenshot.writeToFile('screenshot1.png', 'png')

Place your APK file in the same directory as your .py script, move into your <androidSDK>/tools directory, and run your file in a terminal.

cd into the <androidSDK>/tools
./monkeyrunner /your/path/to/

Now check out the <androidsdk>/tools folder and you should have a screenshot1.png.

MonkeyRunner Screenshot

You can also automate gestures on the phone as well as button presses; more on this in the next post.

Using the device object, call press and pass in the keycode along with UP, DOWN or DOWN_AND_UP to simulate the press you want.

    'KEYCODE_MENU', MonkeyDevice.DOWN_AND_UP)

You can find the complete list of keycodes here –

Android – Setting User Agent or Custom Headers in a Google Volley Request Object

In this tutorial I will show you how to override the getHeaders() function within the Request object in the Google Volley networking library. You might want to add headers to your request to specify content types (for sending JSON), basic authentication, or to manually set a User Agent.

When you send your request off, add a function called getHeaders to the request. Do this by adding the function inside the body of the anonymous Request subclass, after the closing parenthesis of the constructor arguments.

Request request = new Request(
    listener,
    errorListener) {
        @Override
        public Map<String, String> getHeaders() throws AuthFailureError {
            Map<String, String> headers = new HashMap<String, String>();
            headers.put("User-agent", "YOUR_USER_AGENT");
            return headers;
        }
};
This is based on an answer I posted on StackOverflow about the issue.
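To see the override pattern compile and run outside Android, here’s a self-contained sketch using a stand-in Request class — Request and buildRequest below are simplified stand-ins for illustration, not Volley’s real API (the real Request takes more constructor arguments and its getHeaders throws AuthFailureError):

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for Volley's Request class, for illustration only
abstract class Request {
    // Default behaviour: no extra headers
    public Map<String, String> getHeaders() {
        return new HashMap<String, String>();
    }
}

public class HeaderDemo {
    // Build a request with a custom User-agent header via the
    // same anonymous-subclass override shown above
    public static Request buildRequest() {
        return new Request() {
            @Override
            public Map<String, String> getHeaders() {
                Map<String, String> headers = new HashMap<String, String>();
                headers.put("User-agent", "YOUR_USER_AGENT");
                return headers;
            }
        };
    }

    public static void main(String[] args) {
        System.out.println(buildRequest().getHeaders().get("User-agent"));
    }
}
```

The anonymous subclass only replaces getHeaders; everything else on the request behaves as the base class defines it.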

Android – Implementing a Google Maps Search Box with AutoCompleteTextView and Geocoder API

This guide will show you how to implement a search box similar to the Google Maps app, using the built-in AutoCompleteTextView, the Geocoder class, a custom Adapter and a few layout files.

The final result looks like:

AutocompleteTextView with Geocoder class on Android


Step 1 – Implement a custom Adapter to add results to the AutocompleteTextView

public class GeoAutoCompleteAdapter extends BaseAdapter implements Filterable {
	private static final int MAX_RESULTS = 10;
	private Context mContext;
	private List<GeoSearchResult> resultList = new ArrayList<GeoSearchResult>();

	public GeoAutoCompleteAdapter(Context context) {
		mContext = context;
	}

	public int getCount() {
		return resultList.size();
	}

	public GeoSearchResult getItem(int index) {
		return resultList.get(index);
	}

	public long getItemId(int position) {
		return position;
	}

	public View getView(int position, View convertView, ViewGroup parent) {
		if (convertView == null) {
			LayoutInflater inflater = (LayoutInflater) mContext.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
			convertView = inflater.inflate(R.layout.geo_search_result_item, parent, false);
		}
		((TextView) convertView.findViewById(;
		return convertView;
	}

	public Filter getFilter() {
		Filter filter = new Filter() {
			@Override
			protected FilterResults performFiltering(CharSequence constraint) {
				FilterResults filterResults = new FilterResults();
				if (constraint != null) {
					List<GeoSearchResult> locations = findLocations(mContext, constraint.toString());
					// Assign the data to the FilterResults
					filterResults.values = locations;
					filterResults.count = locations.size();
				}
				return filterResults;
			}

			@Override
			protected void publishResults(CharSequence constraint, FilterResults results) {
				if (results != null && results.count > 0) {
					resultList = (List<GeoSearchResult>) results.values;
					notifyDataSetChanged();
				} else {
					notifyDataSetInvalidated();
				}
			}
		};
		return filter;
	}

	private List<GeoSearchResult> findLocations(Context context, String query_text) {
		List<GeoSearchResult> geo_search_results = new ArrayList<GeoSearchResult>();
		Geocoder geocoder = new Geocoder(context, context.getResources().getConfiguration().locale);
		List<Address> addresses = null;
		try {
			// Getting a maximum of 15 addresses that match the input text
			addresses = geocoder.getFromLocationName(query_text, 15);
			for (int i = 0; i < addresses.size(); i++) {
				Address address = addresses.get(i);
				if (address.getMaxAddressLineIndex() != -1) {
					geo_search_results.add(new GeoSearchResult(address));
				}
			}
		} catch (IOException e) {
			e.printStackTrace();
		}
		return geo_search_results;
	}
}

This adapter class will handle the Geocoder search request as well as filtering and drawing the results in the dropdown menu. You’ll need to go ahead and create a new class in your project named GeoAutoCompleteAdapter.

The magic happens in the getFilter function which fires off the Geocoder request and returns the results.

Step 2 – Create a custom class to handle the result

I’ve used a custom class here to handle the Geocoder address result so that, if you want to add some logic to format the string before it’s returned to the adapter, you can do so.

public class GeoSearchResult {
	private Address address;

	public GeoSearchResult(Address address) {
		this.address = address;
	}

	public String getAddress() {
		String display_address = "";
		display_address += address.getAddressLine(0) + "\n";
		for (int i = 1; i < address.getMaxAddressLineIndex(); i++) {
			display_address += address.getAddressLine(i) + ", ";
		}
		display_address = display_address.substring(0, display_address.length() - 2);
		return display_address;
	}

	public String toString() {
		String display_address = "";
		if (address.getFeatureName() != null) {
			display_address += address.getFeatureName() + ", ";
		}
		for (int i = 0; i < address.getMaxAddressLineIndex(); i++) {
			display_address += address.getAddressLine(i);
		}
		return display_address;
	}
}

Step 3 – Create the list item view for the results from the AutocompleteTextView

This view will be used for each result returned from the Geocoder request. Feel free to redesign this as you see fit.

Save this under res/layout/geo_search_result_item.xml (the name the adapter inflates)

<?xml version="1.0" encoding="utf-8"?>
<TextView xmlns:android=""
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:padding="10dp" />

Step 4 – Create a custom AutoCompleteTextView so you don’t spam the Geocoder

With a normal AutoCompleteTextView the filter function fires on every keypress. If you did this against the Geocoder API the requests would be throttled and no results would be returned. By implementing a custom AutoCompleteTextView you can override performFiltering and use a Handler which fires after a delay period. If another keypress arrives before the Handler message is delivered, the pending request is killed and a new delayed request is queued.

Create a new class in your project named DelayAutoCompleteTextView:

public class DelayAutoCompleteTextView extends AutoCompleteTextView {
    private static final int MESSAGE_TEXT_CHANGED = 100;
    private static final int DEFAULT_AUTOCOMPLETE_DELAY = 750;
    private int mAutoCompleteDelay = DEFAULT_AUTOCOMPLETE_DELAY;
    private ProgressBar mLoadingIndicator;

    private final Handler mHandler = new Handler() {
        public void handleMessage(Message msg) {
            DelayAutoCompleteTextView.super.performFiltering((CharSequence) msg.obj, msg.arg1);
        }
    };

    public DelayAutoCompleteTextView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    public void setLoadingIndicator(ProgressBar progressBar) {
        mLoadingIndicator = progressBar;
    }

    public void setAutoCompleteDelay(int autoCompleteDelay) {
        mAutoCompleteDelay = autoCompleteDelay;
    }

    @Override
    protected void performFiltering(CharSequence text, int keyCode) {
        if (mLoadingIndicator != null) {
            mLoadingIndicator.setVisibility(View.VISIBLE);
        }
        mHandler.removeMessages(MESSAGE_TEXT_CHANGED); // kill any pending request
        mHandler.sendMessageDelayed(mHandler.obtainMessage(MESSAGE_TEXT_CHANGED, text), mAutoCompleteDelay);
    }

    @Override
    public void onFilterComplete(int count) {
        if (mLoadingIndicator != null) {
            mLoadingIndicator.setVisibility(View.GONE);
        }
        super.onFilterComplete(count);
    }
}
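The delayed-filtering trick above is a debounce. To show the idea in isolation, here’s a sketch in plain Java using java.util.Timer in place of Android’s Handler — the Debouncer class and its submit method are made-up names for illustration, not part of any Android API:

```java
import java.util.Timer;
import java.util.TimerTask;

// Debounce sketch: only the last submit() within the delay window runs.
public class Debouncer {
    private final Timer timer = new Timer(true); // daemon background thread
    private TimerTask pending;
    private final long delayMs;

    public Debouncer(long delayMs) {
        this.delayMs = delayMs;
    }

    public synchronized void submit(final Runnable task) {
        if (pending != null) {
            pending.cancel(); // kill the earlier, not-yet-fired request
        }
        pending = new TimerTask() {
            public void run() {
            }
        };
        timer.schedule(pending, delayMs);
    }
}
```

Rapid successive submits cancel each other, so only the final one runs after the delay — the same effect performFiltering achieves with removeMessages and sendMessageDelayed.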

Step 5 – Add the custom AutoCompleteTextView to your Activity’s layout file

In here I’ve used an X icon that allows the user to remove the current text inside the AutocompleteTextView. You can download this icon from here.

When you include your new custom component in the layout, reference it with your own package name.

Save this in your Activity’s layout; it’s a FrameLayout containing the AutocompleteTextView and an ImageView with the X icon for removing the text.
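A minimal sketch of such a layout — com.your.package is a placeholder for wherever you created DelayAutoCompleteTextView, ic_clear is a hypothetical drawable name for the X icon, and the two IDs match the ones looked up in Step 6:

```xml
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android=""
    android:layout_width="match_parent"
    android:layout_height="wrap_content">

    <!-- replace com.your.package with your own package name -->
    <com.your.package.DelayAutoCompleteTextView
        android:id="@+id/geo_autocomplete"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:singleLine="true" />

    <!-- X icon that clears the current text; ic_clear is a placeholder drawable -->
    <ImageView
        android:id="@+id/geo_autocomplete_clear"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="right|center_vertical"
        android:padding="10dp"
        android:src="@drawable/ic_clear" />

</FrameLayout>
```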

Step 6 – Assemble the components together

Add this code to the activity you wish to include the AutocompleteTextView and you should now have a Google Maps search box in your app!

	private Integer THRESHOLD = 2;
	private DelayAutoCompleteTextView geo_autocomplete;
	private ImageView geo_autocomplete_clear;

	/** Called when the activity is first created. */
	public void onCreate(Bundle savedInstanceState) {
	    super.onCreate(savedInstanceState);
	    // setContentView(...) with your Activity's layout from Step 5 here
	    geo_autocomplete_clear = (ImageView) findViewById(;
	    geo_autocomplete = (DelayAutoCompleteTextView) findViewById(;
	    geo_autocomplete.setThreshold(THRESHOLD);
	    geo_autocomplete.setAdapter(new GeoAutoCompleteAdapter(this)); // 'this' is Activity instance
	    geo_autocomplete.setOnItemClickListener(new AdapterView.OnItemClickListener() {
	        public void onItemClick(AdapterView<?> adapterView, View view, int position, long id) {
	            GeoSearchResult result = (GeoSearchResult) adapterView.getItemAtPosition(position);
	            geo_autocomplete.setText(result.getAddress());
	        }
	    });
	    geo_autocomplete.addTextChangedListener(new TextWatcher() {
	        public void onTextChanged(CharSequence s, int start, int before, int count) {
	        }
	        public void beforeTextChanged(CharSequence s, int start, int count, int after) {
	        }
	        public void afterTextChanged(Editable s) {
	            if (s.length() > 0) {
	                geo_autocomplete_clear.setVisibility(View.VISIBLE);
	            } else {
	                geo_autocomplete_clear.setVisibility(View.GONE);
	            }
	        }
	    });
	    geo_autocomplete_clear.setOnClickListener(new OnClickListener() {
	        public void onClick(View v) {
	            geo_autocomplete.setText("");
	        }
	    });
	}
The majority of this codebase was developed by Alex Melnykov; I’ve just implemented the Geocoder search on top of it, added an icon to clear the search box, and formatted the results.

Safe Places to Fly Your Drone

Drone enthusiasts are being urged to help design a new ‘safe-to-fly’ app idea, created for flyers to share their favourite sites all over the world. The creators are looking to crowdsource input and opinions to help them progress the idea, with the design stage set to start next month.

An increasing number of consumer multirotor aircraft, UAVs or drones are being bought and flown around the world. However, there is some confusion over how and where you can legally fly drones, with both manufacturers and airspace regulators keen to educate people on this issue. Users need to be aware of any prohibitions or local laws; in the UK, for example, you can’t fly a camera drone over or within 150 metres of congested areas, or within 50 metres of people, vehicles or structures not under your control.

Once it is built, the app will allow flyers to share appropriate and legal sites they have found for flying, wherever they are in the world. This will enable drone users to find easily accessible public right of ways for taking off from, beautiful scenery in open airspace and large, unobstructed areas perfect for flying.

The idea is being hosted on, a new service released by Newcastle University, which enables anyone to propose, design and develop a mobile application. No previous experience is required in app design, with the site aiming to make the process fun and simple. People can get their friends and fellow enthusiasts involved in supporting the concept and promoting the idea, which is then automatically generated by the AppMovement service and released on the Apple App Store and Google Play Store.

Simon Newton, the creator behind the idea, commented: “The drone app will help hobbyists around the world find safe and legal places to fly, by bringing together the different experiences of drone users. It is important to show the world that we don’t all buzz airports, peep into windows or fly like reckless kids!

 “We’ve had great support for the idea so far and the development will be starting soon. However in order to make this free app a success, we need a good user base with plenty of people interacting with the app. This is a great opportunity for drone lovers to be part of the development of an exciting new app, set to enhance the flying experience.”

To take part in the drone app’s design phase, you must register your support at

App Movement – A new crowd commissioning platform for mobile apps

I’ve been working on a new platform, App Movement, alongside some of the chaps in the lab, which enables anyone to propose, design and develop a mobile application without the need to code. You can get your friends involved in supporting the idea and design it together by submitting contributions and voting on each other’s ideas. It’s like Kickstarter for apps, but instead of money you just need to get friends to show their support for the idea.

It works through a templating system, which means we can quickly develop new apps with differing styles and content, with a different community behind each one. At the moment you can create location-based review apps – TripAdvisor-style rating apps for places worldwide. We’ve already found this to be very successful and empowering to communities with the Feed-Finder mobile app, which allowed breastfeeding mothers to rate and review local businesses for how friendly and accepting they are of breastfeeding in public.

We’ve seen some great ideas already on the platform such as an app to find Dementia Friendly Places, Best Place to Fly Your Drone, and Gluten Free Restaurants to just name a few.

If you want to create your own mobile application visit and start your App Movement today.

SocialEngine Tutorial – Creating widgets for user profiles, getting the subject rather than the viewer information

I’ve been working on creating a widget for a user’s profile, and I’ve needed to find out whose profile is being viewed in order to select the correct data.

I made a post about getting user details here, however it only covered getting the current viewer’s data. In this post we’ll be fetching the data for the person currently being viewed by the user.

Let’s say we have two users – Alice and Bob. We have a widget that fetches a list of websites that the user has viewed recently.

When Alice looks on Bob’s profile we need to run a query which uses Bob’s user_id so we can go and fetch the data about him and display the results to Alice.

What we have here is a viewer -> subject relationship whereby a viewer (Alice) is looking at the subject (Bob) and we need his ID rather than Alice’s (the viewer’s) id.

In order to get the current subject’s user details we need to do two things.

First things first, we need to check that we have a subject, otherwise our page will error:

    Engine_Api::_()->core()->hasSubject()

If this evaluates to true then we can do the following:

    Engine_Api::_()->core()->getSubject()->getIdentity()
Now the complete code to get the current subject’s user_id is as follows:

    if (Engine_Api::_()->core()->hasSubject()) {
        $user_id = Engine_Api::_()->core()->getSubject()->getIdentity();
    } else {
        echo 'no subject on this page';
    }

If you want to know what methods are available for the subject object you can call PHP’s get_class_methods() function on the subject, e.g. print_r(get_class_methods(Engine_Api::_()->core()->getSubject()));