Wednesday, September 11, 2013

What Graphics Card Fits in the Shuttle XPC SZ77R5

Earlier this year (2013) I bought a mini PC dedicated to running Plex Media Server. I went with the Shuttle XPC SZ77R5 because it is small enough for me and supports the Intel Core i7-3770K, which I need to handle multiple simultaneous 1080p streams to different devices on my home network. The one thing I did not plan for was the video card: I also wanted to make it a dedicated gaming PC that runs the most recent games flawlessly, but cards in that class are huge and usually won't fit this case without modification.

So after some research I got this card, the Sapphire Radeon HD 7970 OC with Boost 3GB DDR5 DL-DVI-I/SL-DVI-D/HDMI/DP PCI-Express Graphics Card (11197-03-40G). It barely fits, and with the case closed it would probably overheat. You also have to remove the hard drive/CD-ROM rack to fit the card; I moved my hard drive to the top of the case, where there are a bunch of holes you can simply attach it to. The hard part was cutting an opening in the case in line with the graphics card's fans. I never tried running it without that opening, so I can't say whether it would overheat.

It doesn't look that good with the hole, but it works fine.


For anyone interested in what else is in there, here is the complete list of specs.

Wednesday, August 14, 2013

Host a Static Website on Google App Engine with the Flexibility of a Templating System

If you already know how to use Google App Engine, you can skip to the GitHub project; its short summary should be enough to get you started creating static websites. I created this because Google Sites is dropping support for Google AdSense, and I have a bunch of static sites for my projects that use AdSense.


If you are new to Google App Engine, this gives you a way to create static websites with the ability to do things like server-side includes (so you avoid duplicating your headers/navigation) and everything else the Jinja2 templating system can do, with hosting that starts free under a daily quota and has competitive pricing beyond that (as of this writing, 1GB/day free and 12 cents/GB after).

First you need Python 2.7 installed on your system; downloads are on python.org, so choose the one for your platform. Then install the Google App Engine SDK. The last thing you'll need is the GitHub project above, or its zip download.

Once you have everything in place (Python installed, the App Engine SDK extracted/installed, and the project extracted), you can run it. On Mac, and maybe Windows, there is a GUI where you can simply add the extracted "app-engine-static-master" project to the App Engine SDK and run it for testing. Then browse to the dev server, e.g. http://localhost:8080/dev/ (adjust the port to wherever it is running). /dev/ is where your dynamic website is shown; whatever you see there is what will be generated as static files under the gen folder. Now you can start building your website inside the templates/ directory; however you structure it is how it will be served on your URL (see the example tree below).
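If you prefer the command line, the SDK's dev server should work too; a minimal sketch, where the port and folder name are just examples:

dev_appserver.py --port 8080 app-engine-static-master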


templates/
    _layout/main.html (files with an underscore (_) prefix are excluded from URL mapping)
    docs/tutorial1.html (url maps to /docs/tutorial1.html and if hide_html_ext = True on generate.py it maps to /docs/tutorial1)
    about.html (maps to /about.html)
    index.html (maps to / your homepage)

The "hide_html_ext" will just generate the files under a folder and index.html to emulate the clean url designs but by default its using the .html which should be just as good. You would just usually touch the templates directory and then run:

python generate.py

This creates the static files under the gen folder, and they are served directly from your running development server. When everything looks good, deploy your app; if you need to modify anything, do it in templates/*, re-run python generate.py, then redeploy.
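For completeness, deploying with the SDK of that era is a single command from the project directory (assuming appcfg.py from the SDK is on your PATH):

python generate.py
appcfg.py update .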

For more info on how to make use of jinja2 check their official site:
http://jinja.pocoo.org/docs/templates/
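As a concrete example, here is a minimal sketch of the layout pattern this enables; the block name "content" and the page contents are just assumptions, and link() is the helper listed below:

templates/_layout/main.html:
<html>
  <body>
    <nav><a href="{{ link('/about.html') }}">About</a></nav>
    {% block content %}{% endblock %}
  </body>
</html>

templates/about.html:
{% extends '_layout/main.html' %}
{% block content %}
  <h1>About</h1>
{% endblock %}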

Other app-engine-static-specific features include:

Jinja2 Filters:
{{ link('/about.html') }} - use this for any linking between pages

generate.py variables:
use_index_paths - hides index.html on generated static files
hide_html_ext - hides all .html and generated files are all under folder/index.html

static/ - should contain all your other non-generated static files, such as css/js/images
Friday, July 12, 2013

Cookie-less Domain and Static File Versioning with Google App Engine

A cookie-less domain is one of the must-have optimizations when you serve a lot of static files. The reason: if cookies are set on a domain, the browser sends them in the request headers for every static file served from that same domain, and that builds up into unnecessary overhead. Note that if you set your cookie at the domain level and use the appspot domain as your main domain, this won't work (every subdomain sees the cookie). This approach also versions your files, so in your app.yaml you can leave default_expiration: "30d" on all the time; each deploy prefixes your static URLs with a different version. Here is how I did it:


import os
from google.appengine.api import app_identity
from google.appengine.api import backends  # needed for the IS_BACKEND check below

VERSION_ID = os.environ.get('CURRENT_VERSION_ID', '1.1').split('.')
VERSION = VERSION_ID[0]
APP_VERSION = int(VERSION_ID[1])
APP_ID = app_identity.get_application_id()

IS_DEV = os.environ.get('SERVER_SOFTWARE', 'Development/%s' % VERSION_ID).startswith('Dev')
IS_BACKEND = backends.get_backend() is not None

def static_url(num=0):

    if not IS_DEV:
        return 'http://%s' % '.'.join([
            str(num),
            str(APP_VERSION),
            VERSION if not IS_BACKEND else 'cdn',
            app_identity.get_default_version_hostname()
        ])

    return ''


Then prefix all your static js/css etc. with static_url(1), static_url(2), static_url(3), and so on, so they load in parallel. Modern browsers already load resources in parallel, but this still helps. The reason this works is that App Engine can serve multiple versions of your app on appspot.com as (version).(app-id).appspot.com. What I did here is take the unique version of the currently deployed app and build a prefix like (parallel-loading-number).(unique-deployed-app-version).(app-version).(app-id).appspot.com. The (unique-deployed-app-version) part only changes when you do an app deployment, so it is what versions the static files; it never changes until your app is ready and deployed. I'm not really sure, though, what happens if a request goes through an instance that is not ready yet.
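For illustration, a minimal sketch of using the prefix in a template (this assumes static_url is exposed to your template context, which is not shown above):

<link rel="stylesheet" href="{{ static_url(1) }}/static/css/main.css">
<script src="{{ static_url(2) }}/static/js/app.js"></script>

Using different numbers spreads requests across hostnames so the browser opens parallel connections.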

This is also built into my app-engine-starter.
Tuesday, June 18, 2013

HTC One + I-Blason PowerGlider + Moga Pro Review

I bought the I-Blason PowerGlider battery pack thinking it would fit in a Moga Pro for extended gameplay, because a review somewhere on xda said the Mophie doesn't fit in the Moga Pro. So here is a quick video demoing it all together.


It does fit, but it can slip: it's heavy and hard to play with while lying face up, because the Moga Pro grip doesn't have a lock (it might be the same issue with the HTC One alone). Although it doesn't drop in the video, it does feel like it might. Still, it fits, which is the most important thing.

Here are more photos:



I used the Moga tablet stand because the HTC One + PowerGlider is too heavy to stay upright. As for battery life, I've never run out since; sometimes I remove the pack and charge it by itself so I can keep using the phone, and I've never had to charge the phone directly. Beware though: its output current (mA) is only half that of the stock charger. People say it's fine, but I really don't know how it affects the battery.

Connecting the Moga Pro to my phone seems to disconnect the Wi-Fi; not sure if this is a common issue. It has a hard time finding the controller with Wi-Fi on. Other than that everything is smooth, and the Moga is very responsive.


Friday, June 14, 2013

Google App Engine VirtualEnv Tool that is not virtualenv for Python

When I build App Engine projects and want to include a Python package, what I usually do is download the source and symlink/include it in the project. I have tried the virtualenv approach, but it didn't feel clean, and I couldn't find anything that suited my needs in a simple way. So I created a new tool called gaenv, which automates linking your installed packages into your GAE project. It doesn't actually have to be a GAE project: all it does is create a folder of symlinks following your requirements.txt, so it can be useful for any container-based deployment that follows symbolic links.

Installation & usage:
    # Note: on a system with multiple Pythons, call the pip/gaenv
    # binaries of the Python whose packages you want linked
    $ sudo pip install gaenv
    $ cd to/your/gae-project
    # Create your requirements.txt & run this to install it
    $ pip install -r requirements.txt
    $ gaenv

That's it. This should create a folder named gaenv_lib in the same path you run it from, and if you answered "y" to the detected-import prompt, it will look into your app.yaml and try to add the gaenv_lib import before any other import happens. If you have a complex project setup, just do it manually: all it basically does is add gaenv_lib to sys.path. To override those names, check gaenv -h.
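If you skip the automatic insert, the manual equivalent is a couple of lines at the top of your entry module; a sketch, assuming the default gaenv_lib folder name:

import os
import sys

# make the symlinked packages importable before anything else
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'gaenv_lib'))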

For people who haven't used requirements.txt: it's basically a list of your Python packages, one per line, and pip install -r requirements.txt installs them all. gaenv follows the same format. Here is a sample:

boto>=2.8.5
peewee
flask==0.9


You can download and install manually at
https://pypi.python.org/pypi/gaenv
it's also on github at
https://github.com/faisalraja/gaenv

I hope someone finds it as useful as I did. Enjoy.

Also note that this shouldn't stop you from using virtualenv: install all your GAE-compatible packages there, then install gaenv into that virtualenv and run the binary from its bin folder. That way you maintain just one virtualenv.
Tuesday, June 11, 2013

Windows Process Monitor Script in Python

I have a Windows desktop that is always tunneled to some server, for reasons that aren't important here. I use kitty.exe, an improved PuTTY with auto re-login and more (that feature is the only reason I use it). For some reason it stops responding when a lot of traffic has passed through the tunnel, and it now happens frequently enough that I just had to auto-restart it. There may be software that already does this, but here is what I did using Python. It should be reusable with any process; just change the variables for the process you want to monitor.


import os
import subprocess
import time

__author__ = 'faisal'

# Change this if you want to monitor a different process
process_name = 'kitty.exe'
# Command and its parameters as separate list items
# (raw string so the backslashes in the path aren't treated as escapes)
start_command = [r'C:\Applications\Putty\kitty.exe', '-load', 'MyTunnelSession']
# Sleep time in seconds before checking again
sleep_time = 60


while True:
    # Filter a list of windows process that has stopped responding
    r = os.popen('tasklist /FI "STATUS eq Not Responding"').read().strip().split('\n')
    for p in r:
        if p.startswith(process_name):
            # The second whitespace-separated column of tasklist output is the PID
            i = p.split()[1]
            # Kill by PID
            k = os.popen('taskkill /F /T /PID %s' % i).read()
            print k

    # Now check whether the process is running, and start it if it's not
    r = os.popen('tasklist /FI "STATUS eq Running"').read().strip().split('\n')
    found = False
    for p in r:
        if p.startswith(process_name):
            found = True

    if not found:
        with open(os.devnull, 'wb') as devnull:
            subprocess.Popen(start_command, stdout=devnull, stderr=subprocess.STDOUT)

    print 'Running time: %s' % time.time()
    time.sleep(sleep_time)

Also, let this script start your process instead of having the process already started.
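To make the monitor itself start automatically, one option is a logon task via the Windows Task Scheduler; a sketch with hypothetical paths:

schtasks /Create /TN "KittyMonitor" /TR "C:\Python27\python.exe C:\scripts\monitor.py" /SC ONLOGON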
Wednesday, May 29, 2013

Backup/Sync Your Photos to Flickr Script

IF YOU JUST WANT TO USE IT PLEASE SKIP TO INSTALL PART

If you haven't heard, Flickr has made its free storage 1TB for your full-size photos. That's a lot of storage. I have a collection of photos going back to 2000, yet only around 81GB of them, stored on my hard drive, Time Machine, and another backup drive. Those can all fail; I've just been lucky not to lose photos to a drive failure so far. I was looking for an app to sync to Flickr that I could leave alone, and I couldn't find one.

So here is a quick Python script that I created. Here is how backing up works:

Let's say you have this folder:
/Volumes/Storage/Pictures/
2001
  / 2001-01-01
    / File1.jpg
    / File2.jpg
2002
2003

When you run flickrsmartsync, it asks you to log in and grant access, then syncs everything automatically; stopping and restarting it simply resumes, as long as the folder structure is the same. If you sync the same structure under the root folder on another computer, it will sync into the same photo sets but skip files with the same names. Because sets are a single-hierarchy design, everything is synced as a single level of folders, and the set description is used to store the full path. So the example above will create:

Set:
Title: 2001-01-01
Description: 2001/2001-01-01
Then photos File1.jpg, File2.jpg

So it's really designed just for backing up, but that doesn't stop you from actually using Flickr to share your photos: as long as you keep the set descriptions intact, it will not re-upload each photo.

Then, to download in the same format to another computer, just run: flickrsmartsync --download 2008
This will recreate the same structure in the folder where you ran the script.

INSTALLATION
You can download it directly from PyPI and run it with Python:
https://pypi.python.org/pypi/flickrsmartsync
it's also on my github at
https://github.com/faisalraja/flickrsmartsync
Or install with pip:


    # with pip on unix systems (osx/linux)
    sudo pip install flickrsmartsync
    # if you don't have pip, on debian/ubuntu systems install it with
    sudo apt-get install python-pip
    # or just download the source on pypi then cd to it and run
    sudo python setup.py install

    # Sample usage: go to the root directory you want to 
    # backup like the root of your Pictures folder then run
    $ flickrsmartsync
    # then to download to another machine or just restore deleted files
    $ flickrsmartsync --download .
    # to download specific folders
    $ flickrsmartsync --download 2008/2008-01-01
    # running from source without installation (same parameters)
    $ python flickrsmartsync-0.1.7/flickrsmartsync --download .


A Windows update: since version 0.1.7 you can use this without extra dependencies; they're now included in the package. Here are step-by-step instructions:

First download and install Python; choose your Windows version. It's tested on Python 2.7, so install that for now (you can have multiple versions of Python installed). By default it installs itself to C:\Python27. Now download the flickrsmartsync package; if you can't extract it, 7-Zip is a free open source tool that can. Once everything is installed, extract the flickrsmartsync-0.1.7 tar into the location of your photos (such as My Pictures), then type cmd in your Start menu:

    cd Pictures
    # This should upload all photos under your Pictures folder
    C:\Python27\python.exe flickrsmartsync-0.1.7\flickrsmartsync
    # To download, same parameters as above
    C:\Python27\python.exe flickrsmartsync-0.1.7\flickrsmartsync --download .

Wednesday, May 1, 2013

Simple Mapper Class for NDB on App Engine

This class is based on the db Mapper found in the remote_api article, but uses NDB. The purpose: iterating over a lot of entities when there isn't enough time to do it within a single request. This library helps you run a map over all entities of a given kind.

You would use this for cases like deleting users who requested deletion, or updating counters for specific filters.

Here is the NDB version of the Mapper. I have added a few improvements that I have used in the past. Edit: I removed the memcache-based duplicate prevention; that should now be handled at the task scope, e.g. with a task name or with different filters per task (done via different initial data, overriding the get_query method, and using the data as a filter).


import logging
from google.appengine.ext import deferred, ndb
from google.appengine.runtime import DeadlineExceededError


class Mapper(object):

    def __init__(self, use_cache=False):
        ndb.get_context().set_cache_policy(use_cache)
        if not use_cache:
            ndb.get_context().clear_cache()

        self.kind = None
        self.to_put = []
        self.to_delete = []
        self.terminate = False
        # Data you wanna carry on in case of error
        self.data = None
        # Temporary Data that won't carry on in case of error
        self.tmp_data = None
        self.filters = []
        self.orders = []
        self.keys_only = False
        # implement init for different initializations
        self.init()

    def delete(self, entity):
        self.to_delete.append(entity if isinstance(entity, ndb.Key) else entity.key)

    def update(self, entity):
        self.to_put.append(entity)

    def map(self, entity):
        """Processes a single entity.

        Implementers should call self.update(entity) or self.delete(entity) as needed.
        """

    def init(self):
        # initialize variables
        pass

    def deadline_error(self):
        # on deadline error execute
        pass

    def finish(self):
        """Called when the mapper has finished, to allow for any final work to be done."""
        pass

    def get_query(self):
        """Returns a query over the specified kind, with any appropriate filters applied."""
        q = self.kind.query()
        for filter in self.filters:
            q = q.filter(filter)
        for order in self.orders:
            q = q.order(order)

        return q

    def run(self, batch_size=100, initial_data=None):
        """Starts the mapper running."""
        if initial_data is None:
            initial_data = self.data
        if hasattr(self, '_pre_run_hook'):
            getattr(self, '_pre_run_hook')()

        self._continue(None, batch_size, initial_data)

    def _batch_write(self):
        """Writes updates and deletes entities in a batch."""
        if self.to_put:
            ndb.put_multi(self.to_put)
            del self.to_put[:]
        if self.to_delete:
            ndb.delete_multi(self.to_delete)
            del self.to_delete[:]

    def _continue(self, cursor, batch_size, data):
        self.data = data
        q = self.get_query()
        if q is None:
            self.finish()
            return
        # If we're resuming, pick up where we left off last time.
        iter = q.iter(produce_cursors=True, start_cursor=cursor, keys_only=self.keys_only)
        try:
            # Steps over the results, returning each entity and its index.
            i = 0
            while iter.has_next():
                entity = iter.next()
                self.map(entity)
                # Do updates and deletes in batches.
                if (i + 1) % batch_size == 0:
                    # Flush the batch periodically.
                    self._batch_write()
                i += 1
                if self.terminate:
                    break

            self._batch_write()
        except DeadlineExceededError:
            # Write any unfinished updates to the datastore.
            self._batch_write()
            self.deadline_error()
            # Queue a new task to pick up where we left off.
            deferred.defer(self._continue, iter.cursor_after(), batch_size, self.data)
            logging.error(self.__class__.__name__ + ' DeadlineExceededError')
            return
        self.finish()


Then here is a sample usage (User and Comment are assumed to be your own models):
import datetime

from google.appengine.ext import ndb
from google.appengine.ext.ndb import blobstore

class DeleteUser(Mapper):

    def init(self):
        self.kind = User
        # I'm using a generic property because this was an expando model;
        # 'deleted' is set on the user's deletion request with a future date,
        # giving them enough time to undelete.
        self.filters = [ndb.GenericProperty('deleted') <= datetime.datetime.now()]

    def map(self, user):
        # Sample usage why you want to run this in a mapper
        blobstore.delete_multi(user.photos)
        # mini batches here
        for_delete = []
        for comment_key in Comment.query(Comment.user == user.key).iter(keys_only=True):
            for_delete.append(comment_key)
            if len(for_delete) >= 100:
                ndb.delete_multi(for_delete)
                for_delete = []
        ndb.delete_multi(for_delete)
        # and more, the more you do here probably the best to make the batches small
        # to avoid having to duplicate runs on a failure
        self.delete(user)



You can use this on both frontend and backend instances; the 10-minute limit is handled automatically, continuing from the last successful batch. To run it, use the deferred library, or if you want to run it from cron, just create a handler that runs it:
# on a handler
delete_user = DeleteUser()
delete_user.run(1)  # batch size of 1 since each entity does a lot of work

# with the deferred library (for anyone unfamiliar, it's a convenience wrapper around taskqueue)
from google.appengine.ext import deferred
# parameters that start with _ go to the taskqueue API; the rest go to your method
deferred.defer(delete_user.run, 1, _target='backend_name_if_you_want', _name='a_name_to_avoid_dups')
Thursday, April 25, 2013

Summary of my Android Apps

Looking through my blog archive, I noticed I never shared my Android apps here. So here they are: I currently have 4 active apps on Google Play, mostly created for myself.

AppLauncher+
This app automatically organizes your apps based on Google Play categories. I built it because at one point I was flashing firmware so often that reorganizing my app folders took too much time, and I couldn't find an organizer simple enough to just work without me ever touching it again. It has since evolved to include features like:

  • Manual Categorization (had to do it cause of too much demand)
  • Floating launcher (for paid, you can open a folder/assign commands on what it does)
  • Create Shortcut & Folder view on those shortcut (also paid only)
  • Free version basically just gets an organized list with ads! :(

This one is a live wallpaper: you select a static wallpaper and it uses your screen borders as a status bar. I did it because I thought it was cool; judging by its user numbers, it really wasn't. Oh well, I still use it. It can now show the status bar anywhere, plus features like random wallpapers and changing the wallpaper depending on your battery level.

Another app I use myself that I couldn't find an existing version of: it's basically an image/file importer from a link. If you're in an image editor and choose to open a photo, you can select this app, paste a URL, and it will download the file and hand it to the editor.

Shows you a random app. That's it; I was bored. You can star apps for easy access later.

There are a few more that I built with a friend at RamenTech.

JSONRPC Server & Client For Python on Google App Engine

Now that Google Cloud Endpoints is around the corner, it will, and probably should, become the standard way of creating web services for any type of client: mobile, desktop, or even your AJAX requests. It's still experimental as of this writing, and I won't really cover how to use it since its documentation already has good examples.

Instead, I'll share how and what I've used to create my own web services for the Android clients I've built and for AJAX calls.

I created my own JSON-RPC client/server classes for Python, a full implementation of the JSON-RPC spec. I've included it in my app-engine-starter code with a sample (run it and click the JSONRPC Demo dropdown). Feel free to use it; it's a nice, simple library for creating web services.

I will give a quick sample code here on how it's used:


import logging
from google.appengine.ext import webapp, ndb
import jsonrpc


class Calculator(object):

    def add(self, a, b):
        return a + b

    def subtract(self, a, b):
        return a - b


# Here is the RPC Handler for your calculator
class CalculatorHandler(webapp.RequestHandler):

    def post(self):
        # just pass the class you want to expose
        server = jsonrpc.Server(Calculator())
        # passing request & response handles all necessary headers
        server.handle(self.request, self.response)


# Here is the RPC client for your calculator,
# demonstrating both async and synchronous calls.
# You wouldn't normally call back into the same server like this;
# it's just for demo purposes. (AJAX calls are a different story; see the app-engine-starter demo.)
class CalculatorClientHandler(webapp.RequestHandler):

    def get(self):
        # this is an async rpc client, so you don't have to wait for any call to finish
        # (it's also shown in a blog post about searching google)
        # it uses the ndb context, so you can batch it with other ndb async calls;
        # remember that if the server supports batching, you should make use of that
        # async fetches are most helpful for rpc calls to a different domain
        calc_async = jsonrpc.ClientAsync('http://localhost:8080/rpc/calculator')
        futures = [calc_async.add(i, 1) for i in range(5)]
        # meanwhile, make a synchronous call without waiting for the async ones
        calc = jsonrpc.Client('http://localhost:8080/rpc/calculator')
        answer = calc.add(1, 2)
        logging.info('We got this answer before the async requests finished! %s' % answer)
        # now we wait for all of them to finish
        ndb.Future.wait_all(futures)
        # then respond with all the answers
        return self.response.write('%s %s' % (answer, [future.get_result() for future in futures]))


app = webapp.WSGIApplication([('/rpc/calculator', CalculatorHandler),
                              ('/calculator', CalculatorClientHandler)],
                             debug=True)

# to make sure all unhandled async task are finished
app = ndb.toplevel(app)

This is designed specifically for Google App Engine because the client uses the NDB context for asynchronous calls; the server should work normally in any other environment. It shouldn't be hard to make the client work elsewhere either: it's a simple matter of replacing the urlfetch-based library with a normal tasklet equivalent. The NDB context is helpful because if you use a lot of async calls with NDB, you take advantage of its auto-batching, which groups as many requests as possible into the fewest network hops.

Here is a direct link if you just want the jsonrpc.py

An update based on Robert King's suggestion: it's more convenient to create a base ApiHandler that you extend, so you don't have to pass around session variables and everything else you set up at request scope. Here is a way to do it with the current jsonrpc module.
class ApiHandler(webapp.RequestHandler):
    # usually this should really be extending your base handler
    def post(self):
        server = jsonrpc.Server(self)
        server.handle(self.request, self.response)

# Now you directly put all your methods in the handler
class CalculatorHandler(ApiHandler):

    def add(self, a, b):
        return a + b

    def subtract(self, a, b):
        return a - b
Tuesday, April 23, 2013

NDB Caching Queries Tips & Best Practice - Google App Engine

Update: since keys-only queries are now free, I now prefer to cache query results as keys only (keys_only=True) and retrieve the cached entities with ndb.get_multi(keys).
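In other words, something like this minimal sketch (cached_query and the page size are just illustrative, not a fixed API):

from google.appengine.api import memcache
from google.appengine.ext import ndb

def cached_query(qry, cache_id):
    # cache only the keys; keys-only queries are free
    keys = memcache.get(cache_id)
    if keys is None:
        keys = qry.fetch(20, keys_only=True)
        memcache.set(cache_id, keys)
    # entity lookups can then hit NDB's caches instead of charging reads
    return filter(None, ndb.get_multi(keys))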

If you are building a read-heavy App Engine app that does a lot of listing/querying of entities, it's a good idea to cache those queries so you don't get charged for reads, while keeping them up to date without worrying about invalidation.

Here are some of the things I've done for caching queries. This can't be applied to everything, but it should work in most cases and can be implemented the same way for more complex queries.

The idea is to have an updated field on the entity you are filtering by, so you can use it as part of your cache key.

Here is sample code that shows how to display a user's posts with cached queries.


from google.appengine.ext import ndb

class User(ndb.Model):
    created = ndb.DateTimeProperty(auto_now_add=True, indexed=False)
    updated = ndb.DateTimeProperty(auto_now=True, indexed=False)

    email = ndb.StringProperty()
    # It's always good to keep a total of everything if you are displaying it
    total_comments = ndb.IntegerProperty(default=0, indexed=False)


class Comment(ndb.Model):
    created = ndb.DateTimeProperty(auto_now_add=True, indexed=False)
    updated = ndb.DateTimeProperty(auto_now=True, indexed=False)

    user = ndb.KeyProperty(required=True)
    message = ndb.TextProperty()

    @classmethod
    @ndb.transactional(xg=True)
    def post_comment(cls, user, message):        
        user.total_comments += 1
        comment = Comment(user=user.key, message=message)
        ndb.put_multi([user, comment])

    @classmethod
    def get_by_user(cls, user, cursor=None):
        ctx = ndb.get_context()
        # every new comment bumps the user's total and updated fields,
        # so this cache key invalidates instantly
        cache_id = 'get_by_user_%s_%s_%s' % (user.key.urlsafe(), user.updated, cursor)
        cache = ctx.memcache_get(cache_id).get_result()

        if cache:
            result, cursor, more = cache
            # This is your decision if you want to cache keys only
            # it's helpful in cases that you have a single page with that value
            # it means that you cache less and more efficiently
            result = filter(None, ndb.get_multi(result))
        else:
            qry = cls.query(cls.user == user.key)

            result, cursor, more = qry.fetch_page(20, start_cursor=ndb.Cursor(urlsafe=cursor) if cursor else None)
            # caching keys only is again your decision; you could cache the whole entities instead
            # no expiration is needed when the key invalidates itself like this
            ctx.memcache_set(cache_id, ([r.key for r in result], cursor, more))

        return result, cursor, more
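
Finally, a quick sketch of paging through this from a handler; note that get_by_user expects the cursor in its urlsafe string form:

# first page
comments, cursor, more = Comment.get_by_user(user)
# next page: pass the cursor back as a urlsafe string
if more:
    older, cursor, more = Comment.get_by_user(user, cursor.urlsafe())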