Posts

Showing posts from December, 2012

How to Build Live Wallpaper with Canvas on Android

This is a very basic canvas live wallpaper for Android, just to get you started. Read the comments in the code to understand how everything works.

AndroidManifest.xml

```xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="org.altlimit.samplelivewallpaper"
    android:versionCode="1"
    android:versionName="1.0">
    <uses-sdk android:minSdkVersion="15"/>
    <application
        android:label="@string/app_name"
        android:icon="@drawable/ic_launcher">
        <activity android:name="MyActivity" android:label="@string/app_name">
            <intent-filter>
                <action android:name="android.intent.action.MAIN"/>
                <category android:name="android.intent.category.LAUNCHER"/>
```

Join Query on Google App Engine Datastore

App Engine Datastore is a NoSQL database, which means you cannot run standard SQL queries against it; instead you get basic queries through GQL. It is very reliable, does not slow down even with terabytes of data, and has a nice indexing mechanism for fetching data. If you need a relational database you can use Google Cloud SQL, a managed MySQL database from Google. At the time I'm writing this, they support up to 100 GB per database, but you are still bound by the limitations of a standard MySQL setup. So if you really can't avoid a denormalized table (for example, you want to show the name of a user in a list), I'll show you some samples. Everything here uses Python 2.7 and ndb.

```python
from google.appengine.ext import ndb

# just a sample model for how to efficiently join them
class User(ndb.Model):
    name = ndb.StringProperty()
    photo = ndb.BlobKeyProperty()
    """ always create a text version of your blobkey and store
```
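The manual-join idea the excerpt leads into can be sketched independently of App Engine: run the first query, collect the referenced keys, fetch all the referenced entities in one batch, and stitch the results together in memory. The sketch below uses plain dicts and a `get_multi` stand-in so it runs anywhere; on App Engine the batch lookup would be `ndb.get_multi(keys)`. The `USERS`/`POSTS` data and the `posts_with_authors` helper are hypothetical names for illustration, not code from the post.

```python
# Hypothetical denormalized data: each post stores only the author's key.
USERS = {
    "u1": {"name": "Alice"},
    "u2": {"name": "Bob"},
}
POSTS = [
    {"title": "Hello", "author_key": "u1"},
    {"title": "World", "author_key": "u2"},
    {"title": "Again", "author_key": "u1"},
]

def get_multi(keys):
    # Stand-in for ndb.get_multi: one batched fetch for many keys,
    # instead of one datastore round trip per row.
    return [USERS.get(k) for k in keys]

def posts_with_authors(posts):
    # 1. Collect the distinct referenced keys from the first query's results.
    keys = list({p["author_key"] for p in posts})
    # 2. Fetch all referenced entities in a single batch.
    users = dict(zip(keys, get_multi(keys)))
    # 3. Join the two result sets in memory.
    return [dict(p, author_name=users[p["author_key"]]["name"]) for p in posts]
```

The point of the pattern is that no matter how many rows the list has, you pay for exactly two fetches: one query for the posts and one batched get for the users.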

Web Scraping with Google App Engine

Here is a quick tutorial on how to scrape Google search results asynchronously with App Engine, caching the results in memcache. You should not use this directly, because Google can block you; it is just a sample of how to scrape web pages, feeds, XML, and so on. If you do want to do something like this, I recommend adding delays and acting more like a human in your scrapes, though I believe that is still against their TOS. I use async calls here so that people who don't know how to use them yet can learn in the process. The code below is a complete working Google search scraper; read the code comments to understand everything. This is all done with Python 2.7 and ndb.

app.yaml

```yaml
application: your-application-id
version: 1
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /.*
  script: main.app

libraries:
- name: lxml
  version: latest
```

main.py

```python
import urllib
from urlparse import urlparse, parse_qs
from google.appengine.ext import webapp,
```
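The caching half of the post can be sketched on its own: check memcache first, and only perform the slow fetch (and store the result) on a miss. In the sketch below a plain dict stands in for memcache, and `fetch_page` is a hypothetical stand-in for the actual urlfetch call, so the pattern runs anywhere; on App Engine the two cache lines would be `memcache.get(key)` and `memcache.set(key, page, time=cache_secs)`.

```python
import hashlib

CACHE = {}  # stand-in for App Engine's memcache

def fetch_page(url):
    # Hypothetical stand-in for the real HTTP fetch (urlfetch/urllib);
    # this is the expensive call the cache is meant to avoid repeating.
    return "<html>results for %s</html>" % url

def cached_fetch(url, cache_secs=3600):
    # Key the cache on a hash of the URL so any URL yields a safe key.
    key = "scrape:" + hashlib.md5(url.encode("utf-8")).hexdigest()
    page = CACHE.get(key)          # memcache.get(key)
    if page is None:
        page = fetch_page(url)     # cache miss: do the slow fetch
        CACHE[key] = page          # memcache.set(key, page, time=cache_secs)
    return page
```

A real memcache entry would expire after `cache_secs` seconds, so repeated searches for the same query within that window never hit Google at all.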