Thursday, April 25, 2013

Summary of my Android Apps

After checking the archive list of my blog, I realized I've never shared my Android apps on here. So here it is: I currently have 4 active apps, mostly created for myself, that I published on Google Play.

AppLauncher+
This app automatically organizes your apps based on their Google Play categories. I built it because at one point I was flashing my firmware so often that reorganizing my app folders just took too much time, and I couldn't find anything simple enough to just work without me ever having to touch it again. It has since evolved to have features like:

  • Manual categorization (had to add it because of demand)
  • Floating launcher (paid only; you can open a folder or assign commands to what it does)
  • Create shortcuts & folder views on those shortcuts (also paid only)
  • The free version basically just gets the organized list, with ads! :(

This is a live wallpaper: you select a static wallpaper and it uses your screen borders as a status bar. I made it because I thought it was cool. Judging from the current user base, it really wasn't. Oh well, I still use it. It can now show the status bar anywhere, and it has features like random wallpapers and changing the wallpaper depending on your battery level.

Another app I built for myself because I couldn't find one that exists. It's basically an image/file importer from a link: if you are using an image editor and choose to open a photo, you can select this app, paste the URL, and it will download the file and hand it to the editor.

Shows you a random app. That's it; I was bored. You can star apps for easy access later.

There are a few more that I built with a friend at RamenTech.

JSONRPC Server & Client For Python on Google App Engine

Now that Google Cloud Endpoints is around the corner, it will (and probably should) become the standard way of creating web services for any type of client: mobile, desktop, or even your AJAX requests. It's still experimental as of this writing, and I won't really talk about how to use it since its documentation already has some good examples.

Instead, I will share how and what I've used to create my own web services for the Android clients I have built and for AJAX calls.

I have created my own JSON-RPC client/server classes for Python, a full implementation of the JSON-RPC standard. I have included it in my app-engine-starter code with some samples; run it and click the JSONRPC Demo dropdown. Feel free to use it. It is a nice, simple library for creating web services.

Here is a quick code sample showing how it's used:


import logging
from google.appengine.ext import webapp, ndb
import jsonrpc


class Calculator(object):

    def add(self, a, b):
        return a + b

    def subtract(self, a, b):
        return a - b


# Here is the RPC Handler for your calculator
class CalculatorHandler(webapp.RequestHandler):

    def post(self):
        # just pass the class you want to expose
        server = jsonrpc.Server(Calculator())
        # passing request & response handles all necessary headers
        server.handle(self.request, self.response)


# Here is the RPC client for your calculator,
# demonstrating both the async & synchronous way.
# You wouldn't really want to call your own server like this;
# it's just for demo purposes. (Not true for AJAX calls, which are included in the app-starter demo.)
class CalculatorClientHandler(webapp.RequestHandler):

    def get(self):
        # this is an async rpc client, so you don't need to wait for the calls to finish
        # (it's also used in a blog post about searching Google)
        # it uses the ndb context, so you can batch it with other ndb async calls
        # remember that if the server supports batching, you should make use of that
        # async fetches are especially useful for rpc calls to a different domain
        calc_async = jsonrpc.ClientAsync('http://localhost:8080/rpc/calculator')
        futures = [calc_async.add(i, 1) for i in range(5)]
        # now we make a synchronous call without waiting for the async ones
        calc = jsonrpc.Client('http://localhost:8080/rpc/calculator')
        answer = calc.add(1, 2)
        logging.info('We got this answer before the async requests finished! %s' % answer)
        # now we wait for all the async calls to finish
        ndb.Future.wait_all(futures)
        # then we respond with all the answers
        self.response.write('%s %s' % (answer, [future.get_result() for future in futures]))


app = webapp.WSGIApplication([('/rpc/calculator', CalculatorHandler),
                              ('/calculator', CalculatorClientHandler)],
                             debug=True)

# to make sure all unhandled async tasks are finished
app = ndb.toplevel(app)

This is specifically designed for Google App Engine because the client uses the ndb context for its asynchronous calls. The server should work fine in any other environment, and it shouldn't be hard to change the client to work without a tasklet either; it's just a matter of swapping out the library used for the URL fetch. Using the ndb context is helpful because, if you make a lot of async calls with ndb, you take advantage of its auto-batching, which tries to group all possible requests into as few network hops as possible.

Here is a direct link if you just want the jsonrpc.py
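
If you just want to hit the endpoint without the client class (say from another platform or a hand-rolled AJAX call), the request body is plain JSON-RPC. Here is a minimal sketch of a raw call with urllib2, assuming the module follows the standard JSON-RPC 2.0 envelope; adjust the payload if your version of the module expects a different shape.

import json
import urllib2

# NOTE: assumes the server accepts a standard JSON-RPC 2.0 request body
payload = json.dumps({
    'jsonrpc': '2.0',
    'method': 'add',
    'params': [1, 2],
    'id': 1,
})

req = urllib2.Request('http://localhost:8080/rpc/calculator',
                      data=payload,
                      headers={'Content-Type': 'application/json'})
response = json.loads(urllib2.urlopen(req).read())
print response.get('result')  # should be 3 if the call succeeded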

An update based on Rober King's suggestion: it would be more convenient to just create a base ApiHandler so that you can easily extend it and not have to pass around your session variables and anything else you set up in the request scope. Here is a way to do it with the current jsonrpc module.

class ApiHandler(webapp.RequestHandler):
    # usually this should really be extending your base handler
    def post(self):
        server = jsonrpc.Server(self)
        server.handle(self.request, self.response)

# Now you directly put all your methods in the handler
class CalculatorHandler(ApiHandler):

    def add(self, a, b):
        return a + b

    def subtract(self, a, b):
        return a - b

Tuesday, April 23, 2013

NDB Caching Queries Tips & Best Practice - Google App Engine

Update: Since keys-only queries are now free, I would prefer to just run the query with keys_only=True, cache the resulting keys, and then retrieve the cached values with ndb.get_multi(keys).
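
As a rough sketch of what that update means (the helper name, cache key, and page size here are just placeholders, and it assumes the Comment model defined further below):

from google.appengine.ext import ndb

def cached_comment_keys(user, cache_id):
    # illustrative only: run a free keys-only query, cache the keys,
    # then hydrate them with ndb.get_multi()
    ctx = ndb.get_context()
    keys = ctx.memcache_get(cache_id).get_result()
    if keys is None:
        keys = Comment.query(Comment.user == user.key).fetch(20, keys_only=True)
        ctx.memcache_set(cache_id, keys)
    # get_multi goes through ndb's own caches before touching the datastore
    return filter(None, ndb.get_multi(keys))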

If you are creating a heavy-read App Engine app that does a lot of listing/querying of entities, it's a good idea to cache those queries so you don't get charged for the reads. But you also want the results to stay up to date without having to worry about invalidation.

Here are some of the things I've done for caching queries. This can't be applied to everything, but it should work in most cases and can be implemented in the same manner for more complex queries.

The idea is to keep an updated timestamp on the entity you are filtering on, so you can use it as part of your cache key.

Here is some sample code that shows how to display a user's comments with cached queries.


from google.appengine.ext import ndb

class User(ndb.Model):
    created = ndb.DateTimeProperty(auto_now_add=True, indexed=False)
    updated = ndb.DateTimeProperty(auto_now=True, indexed=False)

    email = ndb.StringProperty()
    # It's always good to keep a total of everything if you are displaying it
    total_comments = ndb.IntegerProperty(default=0, indexed=False)


class Comment(ndb.Model):
    created = ndb.DateTimeProperty(auto_now_add=True, indexed=False)
    updated = ndb.DateTimeProperty(auto_now=True, indexed=False)

    user = ndb.KeyProperty(required=True)
    message = ndb.TextProperty()

    @classmethod
    @ndb.transactional(xg=True)
    def post_comment(cls, user, message):
        # putting the user also bumps its auto_now `updated` field,
        # which instantly invalidates the cached queries below
        user.total_comments += 1
        comment = cls(user=user.key, message=message)
        ndb.put_multi([user, comment])

    @classmethod
    def get_by_user(cls, user, cursor=None):
        ctx = ndb.get_context()
        # every new comment bumps the user's total_comments and `updated` fields,
        # so the cache invalidates instantly
        cache_id = 'get_by_user_%s_%s_%s' % (user.key.urlsafe(), user.updated, cursor)
        cache = ctx.memcache_get(cache_id).get_result()

        if cache:
            result, cursor, more = cache
            # It's your decision whether to cache keys only;
            # it's helpful when the same entities show up on several pages,
            # since you cache less and more efficiently
            result = filter(None, ndb.get_multi(result))
        else:
            qry = cls.query(cls.user == user.key)

            result, cursor, more = qry.fetch_page(20, start_cursor=ndb.Cursor(urlsafe=cursor) if cursor else None)
            # caching keys only is, again, your decision; you can cache the whole
            # entities if it doesn't matter. An expiration is not needed when the
            # cache key is this specific.
            ctx.memcache_set(cache_id, ([r.key for r in result], cursor, more))

        return result, cursor, more
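
For completeness, here is a minimal sketch of how a handler might page through these cached results; the handler class and request parameters are hypothetical, not part of the original code. Note that fetch_page returns a Cursor object, so it has to be converted back to a urlsafe string before handing it to the client.

from google.appengine.ext import ndb, webapp


class UserCommentsHandler(webapp.RequestHandler):
    # hypothetical handler, just to show the calling convention

    def get(self):
        user = ndb.Key(urlsafe=self.request.get('user')).get()
        comments, cursor, more = Comment.get_by_user(
            user, cursor=self.request.get('cursor') or None)

        for comment in comments:
            self.response.write('%s\n' % comment.message)
        if more and cursor:
            # hand this urlsafe cursor back on the next request
            self.response.write('next cursor: %s' % cursor.urlsafe())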