Wednesday, March 26, 2014

Organizing Images in Date-Named Folders

Like everybody, I take a lot of photos with my mobile devices. As my latest devices came with a 50 GB Dropbox Pro account, I have "Camera Uploads" enabled on all of them. Currently I have ~1500 files in my "Camera Uploads" folder with a total size of 14 GB (videos take up most of it). As Dropbox only uploads the images and videos without any organization, all files sit in the root of the "Camera Uploads" folder. Needless to say, opening that folder from a desktop is painful: it takes quite a while creating thumbnails and sorting files. I've adapted a script that solves this problem to my needs.

The original script, EXIFMover.py, can be found on GitHub as a gist. It sorts media files into folders named after the device that created each image.

What I wanted was a more traditional scheme based on dates: /YEAR/month.

So I forked EXIFMover.py and modified it. My fork is also available as a gist on GitHub (EXIFMover.py). What it finally does is:

  • Checks all files in the current folder (by uncommenting and changing line 43, it can parse a specific folder instead)
  • Checks whether the file has a valid extension (png, jpg, jpeg, mp4)
  • Checks whether it can extract a date from the file name (it searches for "YYYY-mm-dd")
  • If no date is found in the file name, it tries to extract one from the EXIF data (if ExifRead is installed)
  • If a date has been found, it moves the file to folder/YEAR/month, creating the path if it doesn't exist
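The steps above can be sketched roughly like this. This is a hypothetical outline, not the actual gist code: the function names are mine, and ExifRead is treated as an optional dependency, as in the script.

```python
import os
import re
import shutil

# Extensions and the "YYYY-mm-dd" search mirror the script's behaviour.
DATE_IN_NAME = re.compile(r"(\d{4})-(\d{2})-(\d{2})")
VALID_EXTENSIONS = {".png", ".jpg", ".jpeg", ".mp4"}

def date_from_filename(filename):
    """Return (year, month) if the name contains a YYYY-mm-dd date."""
    match = DATE_IN_NAME.search(filename)
    if match:
        return match.group(1), match.group(2)
    return None

def date_from_exif(path):
    """Fall back to EXIF data, if the optional ExifRead package is installed."""
    try:
        import exifread
    except ImportError:
        return None
    with open(path, "rb") as f:
        tags = exifread.process_file(f, details=False)
    tag = tags.get("EXIF DateTimeOriginal")
    if tag:  # EXIF dates look like "2014:03:26 10:00:00"
        year, month, _ = str(tag).split(" ")[0].split(":")
        return year, month
    return None

def organize(folder):
    """Move each dated media file into folder/YEAR/month."""
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if not os.path.isfile(path):
            continue
        if os.path.splitext(name)[1].lower() not in VALID_EXTENSIONS:
            continue
        found = date_from_filename(name) or date_from_exif(path)
        if found:
            year, month = found
            dest = os.path.join(folder, year, month)
            if not os.path.isdir(dest):
                os.makedirs(dest)
            shutil.move(path, os.path.join(dest, name))
```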
It is quite fast: it moved all my files in less than a second on Ubuntu. After that, Dropbox has to sync all the contents, which takes some time.

To use it on your folder full of images:

  • Download the script from here into the folder where you have your images
  • Issue python EXIFMover.py on the command line
You need Python installed. For more information check the comments in the script. An important one is this:

# This is experimental and one-way in a destructive sense, I take no responsibility
# if this absolutely destroys your directory structure for some reason

The gist:


Thursday, March 6, 2014

Energy monitoring at home

I live in Spain, where our lovely politicians have raised the electricity bill substantially. Leaving taxes aside, the electricity bill has two main factors: consumption and contracted power (the maximum kilowatts you can draw). At first the cost of consumption started to rise, but people in Spain started to use less power, so politicians raised the cost of contracted power instead. I have 9.2 kW contracted, which is quite high, so I wanted to reduce it to lower my electricity bill.

I made a guess at what my power use could be, but I was afraid that if I reduced my contracted power too much I would suffer continuous cuts. So I spent some money on an electricity monitoring system for my house. After a ten-minute check of the options on the internet I bought a kit from efergy.com, the Engage E2 hub kit.

This kit contains: a current clamp sensor, a portable display unit, an emitter and an engage hub gateway. 
The clamp sensor, the emitter and the hub

In other words: a sensor, an emitter that relays the sensor readings to a display, and a hub that sends the data to the Efergy web page. With this kit I can track my consumption either on the display or on a web page:



What made me choose the Efergy solution over the others was that I would be able to track my consumption on both the display and the web, and that I would be able to access the raw data.

The installation was trivial: you only have to put the clamp around the line you want to monitor, put the batteries in the emitter and connect the clamp to it. Fortunately my distribution panel had empty space; neither the clamp nor the emitter is bulky, but usually these panels are quite crowded.

During the first days I was constantly checking the display, and I discovered several things about my house: that my water heater switches on and off throughout the day, how much power the oven, the microwave, the stove, etc. use.

I'm quite happy with this purchase because it fulfills all my requirements, but I have some "it would be better if...":
  • I would like to monitor some lines separately. My distribution panel has several lines (water heater, sockets, kitchen, etc.). I would like to be able to have several clamps and monitor them separately with a single emitter and hub.
  • Efergy has a complete solution to read a sensor, display its data and store it in the cloud. Why do they do it only for power consumption? Why don't they sell temperature sensors? A temperature controller like Nest? Any other kind of sensor or simple controller towards a smart house? They have the base to do a lot of things, and I am completely sure I am not the only nerd who would like a chart of the temperatures of his rooms.
  • The Android application works, but it is far from polished, and it doesn't provide much information.
  • The web page provides information, but not much. I would like real statistics of my usage. As I can download monthly usage with minute resolution, I've written my own script to get the data I want (I will show it in the next post). But since they have the data, a web page and a way to show charts, they should show something more than monthly usage. Why don't they suggest the best rate I could get in my country according to my usage? In my opinion that would be a basic feature (one they could monetize by referring customers to companies). They don't even tell me whether, given my usage, a peak/valley rate or a single-rate contract is better. What about peak power consumption?
  • Since they have an Android application, they could offer an alarm service: notify me of anomalous usage so I can check whether something is wrong.
In my opinion their platform has a lot of potential, but they are using only a small part of it.



Thursday, January 23, 2014

My two cents on Titanium Appcelerator

Four months ago I managed to find some time at night to develop an app. I had in mind a very simple app, because what I wanted was to get some experience in app development and in app stores (how users interact with app developers, how they react to certain types of changes in the app, etc.). But to get that experience I needed the app. Of course I wanted to cover every platform I could, so I took a look at app frameworks that, with one code base, produce apps for all platforms (iOS and Android were the minimum).

I took a look at several frameworks: Titanium from Appcelerator, PhoneGap, etc. I had some requirements:

  • User base: I didn't have experience in mobile app development, so I didn't want to be a pioneer on a new platform.
  • Good documentation: again, I am a rookie at this.
  • Native applications: I wanted the full power, sure.
  • Easy to learn: I didn't want a steep learning curve. I was at a point where I needed fast results; I was coming out of a long project that had failed and needed the morale boost of getting something good done fast.
Taking this into account, I chose Titanium. It had a lot of developers and a lot of apps built with it, extensive documentation on its page and several books; it produced native apps, and JavaScript seemed easy to learn.

At first it was like a love story. Everything was perfect. There was some lack of libraries and its marketplace was a little disappointing, but I found tutorials and GitHub repos where I could get everything I needed.

I got results really fast. Without knowing the framework beforehand, I had a working version of my pomodoro technique application in less than two weeks. It was not ready to be published, as it had several drawbacks and bugs, but it was running and it was nice. And it worked both on my Android devices and in an iOS simulator. Great!

Then the problems came. First, I gave up on being multiplatform. Yes, it can produce versions for several platforms, as long as you insert enough "if (Ti.Platform.osname === 'iphone') ... else ..." branches. So I didn't have one code base; I had two or more, and they were harder to maintain than two genuinely separate code bases.

Once I gave up on multiplatform I focused on developing only for Android (I have Android devices to test the app on, and for iOS development I had bought a cheap second-hand Mac mini that is no match for my Linux desktop; my Linux desktop rocks!). The problem was that I was using recipes in most of my code. I wanted to show a notification: lots of Ti.Android calls whose purpose I didn't understand, but they worked. I didn't know most of the Android concepts (Activity, Intent, View, etc.) but I was using them. This led to frustration.

It was clear that I had to learn a minimum about the platform I was developing for. I was using Titanium so that I only had to learn one framework, but if you don't know where your code is running, you won't master Titanium.

My app was simple, but it generated a big APK: as big as 7 MB, even after I shrank all the images and did my research on the topic. Come on, it is a timer; only JavaScript files, XML and some icons. How can that be a 7 MB APK? People with cheap phones sometimes can't afford that, and people with high-end phones don't like big apps either (we may happily download FIFA, which takes 1 GB, but a simple app taking so much space is a bad sign in my opinion). I learned that in order to execute the JavaScript logic, a JavaScript engine is embedded inside the APK. It makes sense, but the result is a bulky APK.

A more serious problem: my app was resource-hungry. At first I was updating my timer every 100 ms, but that used 50-60% of my Galaxy S3's CPU. OK, fewer refreshes: update once every half second, just to be sure the timer doesn't lose a tick. Still 15-25% CPU use! That is too much in my opinion; most apps, even complex ones, don't consume that many resources. It could be my fault, maybe I was doing something wrong, but trust me, I tried everything I could to solve it.

And finally, the problem that was driving me crazy: no matter what I did, my app was not as responsive as it should be. Sometimes it caught all my onClick events, sometimes it didn't. The same APK on two different phones: everything fine on my S3, 50% failures on the other. Here I started to see ghosts. Again, it could be me, but an onClick is an onClick.

My app's behaviour reminded me of when I started to use Wunderlist. Now Wunderlist works perfectly, without a single hiccup, but when I started using it, it was not always responsive. Really nice, but not very responsive: you had to check twice when ticking a list item, and sometimes you had to tap several times before the item was checked. I kept using Wunderlist because I had some lists shared with my wife, but sometimes it was frustrating, especially when sync issues arose or the GUI stopped responding. I knew they used Titanium when they started; now I've seen that they switched to truly native applications when they launched Wunderlist 2. I don't know their reasons for switching.

Finally I had to open my eyes: it was not working! With each of these problems the same question arose: do I keep using something that may not fit my requirements, or switch to native and start from scratch? I was afraid of starting from scratch again: learning Java, learning how a native Android app is structured, its steep learning curve, etc., and learning it all again for iOS if I wanted to port my app. But in the end, I took a look at it.

And you know what? It is easier than I thought. Sure, it has its own drawbacks; it is not perfect. But it is easy: styling an app is as easy as in Titanium, yet at the same time it is easy to follow the Android design guidelines, to keep your app under 2 MB and under 3% CPU use whatever it is doing, etc.

I am not saying that Titanium is a bad solution. My opinion is that it is an easy solution that comes packed with a lot of features (cloud, notifications, etc.) and a lot of power under the hood, but I had to switch because it was getting really hard to achieve the results I wanted. I think it can be used for a really quick multiplatform solution, but you pay some tolls.

I don't think this is only a problem of Titanium. In my opinion (without having tested them), every solution has its own set of problems, even native applications.

P.S. I was using version 3.1.x. Maybe they solved most of the issues in version 3.2.


Monday, October 14, 2013

Determining if a device is a tablet on Titanium Appcelerator

I'm learning how to develop mobile applications with Appcelerator's Titanium. Titanium is a framework that lets you develop native applications for both iPhone and Android (plus BlackBerry, mobile web and something called Tizen) with a single code base. As a one-man spare-time hobby it is impossible to develop the same application twice, once in Java and once in Objective-C, so in my opinion, and for my case, it is a good approach.

You can define several UIs and choose which devices use each one. The usual case is to define a phone UI and a tablet UI. The problem is that the code in the templates given by Titanium uses a poor definition of "tablet".

In code:
My old Galaxy S3 has a resolution of 720x1280, so with this code it would be considered a tablet, and it's not even a phablet by today's standards.

In my opinion, instead of picking an arbitrary screen resolution as the borderline between tablets and phones, it is better to use the real screen size. I'm using a value of 7 inches as the tablet threshold. My code:

I've seen similar code that uses 'dpi' instead of xdpi and ydpi. Devices don't have the same dpi on the x and y axes, and when the screen density is high that can make a difference. For example, on my Galaxy Note 8, using dpi gives a screen size of 7 inches, but using xdpi and ydpi returns the correct value of 8 inches. That isn't a problem in this case, but for a genuine 7-inch tablet it would probably return a value smaller than 7, and the device wouldn't be considered a tablet.
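The same computation can be sketched in Python (the real code is Titanium JavaScript, so this is only an illustration of the idea; the dpi figures in the comments are approximate):

```python
import math

def is_tablet(width_px, height_px, xdpi, ydpi, threshold_inches=7.0):
    """Compute the physical diagonal from the per-axis densities and
    compare it against the tablet threshold."""
    width_in = width_px / float(xdpi)
    height_in = height_px / float(ydpi)
    return math.hypot(width_in, height_in) >= threshold_inches

# Galaxy Note 8 (1280x800 at roughly 189 dpi): diagonal ~8", a tablet.
# Galaxy S3 (720x1280 at roughly 306 dpi): diagonal ~4.8", a phone,
# despite its high resolution.
```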

Tuesday, October 1, 2013

Colored tail in python

At work we are developing a rather complex system implemented in Python: a distributed system with a central node in charge of coordinating the rest of the modules. We take full advantage of Python's logging features.

When I have to debug the integration of some module into the server, I end up with several terminals each running a tail -f path/to/log/file command. But it is usually hard to find what you are looking for when your application generates a lot of information.

I searched for a way to highlight a word and found this one-liner:

 tail -f file.log | perl -pe 's/keyword/\e[1;31;43m$&\e[0m/g'

where you replace keyword with whatever you want to highlight. This command is too complex to remember and type (I've never learned Perl), so I developed my own colored tail in Python.

You can find it as a gist at: ctail gist

If you download it somewhere in your PATH as ctail (removing .py) and make it executable (chmod u+x), you can run it like:

ctail path/to/your/log keyword1 keyword2 ... keywordN

The main difference from the tail -f command was that it did not show the last 10 lines by default, only new input (now it does; see the edit below). I also guess that my script is slower than tail, though I have not measured it.

It has an optional flag, -d, which colors any datetime found in the format 'yyyy-mm-dd HH:MM:ss'.

By default it shows the last 10 lines, but this can be changed with the -l modifier.

Another good point of this coloring tail is that it accepts regular expressions. As a simple example:

ctail -d /var/log/an_example.log ab_.* exe_.*

gives as output:


Datetimes are colored in red because we provided the -d option, and then both regular expressions are applied to decide what to color.
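The core trick behind a coloring tail can be sketched as follows. This is a minimal sketch mirroring the perl one-liner above, not the actual ctail code; the function name is mine.

```python
import re
import sys

RESET = "\033[0m"
# A few ANSI colors, cycled one per keyword/regex.
COLORS = ["\033[1;31m", "\033[1;32m", "\033[1;33m", "\033[1;34m"]

def highlight(line, patterns):
    """Wrap every regex match in an ANSI color, a different color per
    pattern, resetting the color after each match."""
    for i, pattern in enumerate(patterns):
        color = COLORS[i % len(COLORS)]
        line = re.sub(pattern, lambda m: color + m.group(0) + RESET, line)
    return line

# Usage idea: read lines from the tailed file (or stdin) and print them
# highlighted, e.g.
#   for raw in sys.stdin:
#       sys.stdout.write(highlight(raw, sys.argv[1:]))
```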

P.S. I know, I have defined a lot of colors but only use some of them. I just copy-pasted them from a bash script and wanted this list of colors saved somewhere. Here it is.


2013.10.03 - edit: I've updated the code, since it only worked on Python 2.7+. I've also added the functionality to show the last 10 lines of the file.

Wednesday, September 25, 2013

Adding data to graphite in Python

Graphite is a realtime graphing solution; check the previous post to learn more about it. Graphite is usually used to monitor the status of server farms, but it can be used to monitor any variable in our programs, as long as it is a number.

Carbon is the Graphite component in charge of receiving time series data and storing it in the Whisper database. It can receive data through three protocols:
  • plain text protocol: a specially formatted string sent to a socket
  • pickle format: a pickle of a Python list of data sent to a socket
  • AMQP: data received through a message bus via AMQP
As I am going to explain how to add data to Carbon from Python, I will use the second protocol, as it allows sending as many metrics as we want in a single access. The plain text protocol forces you to open a socket connection to Carbon for each point of each metric; with the pickle format, several metrics can be embedded in a single packet and sent to the Carbon daemon in one go.
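A minimal sketch of that exchange (my carboniface.py class wraps this kind of logic; Carbon's pickle listener defaults to port 2004):

```python
import pickle
import socket
import struct
import time

def build_payload(metrics):
    """metrics: a list of (path, (timestamp, value)) tuples.

    Carbon expects the pickled list prefixed with its length packed as a
    4-byte big-endian unsigned integer."""
    payload = pickle.dumps(metrics, protocol=2)
    header = struct.pack("!L", len(payload))
    return header + payload

def send_metrics(metrics, host="localhost", port=2004):
    """Send several metrics to the Carbon daemon in a single connection."""
    sock = socket.create_connection((host, port))
    try:
        sock.sendall(build_payload(metrics))
    finally:
        sock.close()

# Usage idea: several points of several metrics in one access, e.g.
# now = int(time.time())
# send_metrics([
#     ("myapp.requests", (now, 42)),
#     ("myapp.latency_ms", (now, 12.5)),
# ])
```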

I've developed a class that implements the pickle-protocol interface to a Carbon daemon. It can be found as a gist on GitHub: carboniface.py

At the end of the file there is an example on how to use it. The carboniface.py code:

Friday, September 20, 2013

NDB structuredProperty costs

When adding new instances to the NDB datastore:
  • if you use an ndb.StructuredProperty(), each write of the containing class additionally costs two write operations per field of the structured property
e.g.: as ProviderNDB has 3 properties and testNDB has one StructuredProperty of type ProviderNDB plus another property:
  • p.put() costs 8 write operations
  • t.put() costs 10 write operations
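The arithmetic is consistent with the legacy datastore pricing of 2 write operations per entity plus 2 per indexed property value; as a hypothetical sketch (the actual models live in the gist above):

```python
def put_cost(num_indexed_values):
    """Write operations charged for putting a new entity: 2 for the
    entity itself plus 2 per indexed property value."""
    return 2 + 2 * num_indexed_values

# ProviderNDB: 3 properties
provider_cost = put_cost(3)      # 8 operations
# testNDB: a StructuredProperty of type ProviderNDB (3 fields) + 1 property
test_cost = put_cost(3 + 1)      # 10 operations
```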
Written with StackEdit.