More on Environment Variables and Local Settings

(See the earlier post)

A little more research after a suggestion from the server admin has confirmed that I can’t see Apache environment variables with os.environ in a WSGI interface. Instead, those are available through the request object. Since I don’t yet have a request object when starting up the app, I had to find a new way to ID the instance.
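To make the distinction concrete, here is a minimal WSGI sketch: a variable set with Apache's SetEnv arrives per request in the environ dict passed to the application, not in os.environ. ('TIER' is the variable name my admin uses; yours may differ.)

```python
def application(environ, start_response):
    ## Apache SetEnv values show up here, per request, in the
    ## WSGI environ dict -- not in os.environ.
    tier = environ.get('TIER', 'unknown')
    body = ('Tier: %s' % tier).encode('utf-8')
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [body]
```

Since the environ only exists once a request arrives, code that runs at startup (like settings.py) never sees these values, which is exactly the problem I hit.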

My server admin suggested that he add a system environment variable that would hold the instance name. I can read those variables just fine. He did, I read it, and all is well. New code:

import os
import socket

TIER = os.environ.get('TIER', '')   ## value of the TIER system environment variable
fqdn = socket.getfqdn()             ## still needed to identify the laptop

SERVER_ENVIRONMENT = 'UNKNOWN'

if fqdn == 'dashdrum_laptop':      ## laptop
    SERVER_ENVIRONMENT = 'Laptop'
elif TIER == 'dev':                ## dev server
    SERVER_ENVIRONMENT = 'DEV'
elif TIER == 'qa':                 ## qa server
    SERVER_ENVIRONMENT = 'QA'
elif TIER == 'prod':               ## production server
    SERVER_ENVIRONMENT = 'PROD'

Note that I’m still using the fqdn to ID the laptop.

Django – Local Settings Revisited

Settings in a Django project will differ between development and production instances. In my case, I develop my projects on my laptop running the built in Django webserver and the sqlite3 database, while the production systems use Apache and MySQL (on another server). Media paths, the DEBUG setting, and other attributes can differ.

After browsing around several web sources, I learned that the prevailing wisdom is to import a local-settings.py file, which in turn imports another settings file containing the installation-specific values. All of the settings files are kept in version control except local-settings.py, which is different for each installation. This setup has worked fine with my personal applications as well.
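One common shape for this pattern, as a simplified sketch (file and setting names are illustrative, not my actual project's):

```python
## settings.py -- shared defaults live above this line
DEBUG = False

## Pull in installation-specific overrides, if present;
## local_settings.py is the one file kept out of version control.
try:
    from local_settings import *
except ImportError:
    pass   ## no local_settings.py on this box; shared defaults stand
```

Each installation then gets its own local_settings.py that overrides whatever it needs (DEBUG, database settings, media paths, and so on).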

However, I learned this week that the way my central IT office allows access to servers makes this method unworkable. I have been given easy access to the development server, where I have been able to create and modify the local-settings.py file as needed. But I will have no access to the quality assurance or production servers, except through a deploy mechanism that copies the entire application directory from the dev server to QA, and from QA to production. This means that I can’t create a unique local-settings.py on each box.

Instead, I need to be able to determine the instance based on values available at run time. My server admin told me of an Apache environment variable called ‘tier’ that he maintains in the Apache configuration, but despite my efforts, I was unable to find a way to access the variable. If anyone out there can guide me in the right direction, please leave a comment.

One downside to using the Apache variable is that not all of my installations use that web server, so I’d still have to find an alternative (or use a default value) to properly identify other systems.

My next idea was to read the name of the server running the application and use that to make the decision. The getfqdn() function of Python’s socket module returns the fully qualified domain name, which worked well to identify the instance. Here’s the code (your domain names may vary):

settings.py

## Determine the server environment.

import socket

fqdn = socket.getfqdn() ## Get the fully qualified domain name

SERVER_ENVIRONMENT = 'UNKNOWN'

if fqdn == 'dashdrum_laptop':          ## laptop
    SERVER_ENVIRONMENT = 'Laptop'
elif fqdn == 'dev.example.com':       ## dev server
    SERVER_ENVIRONMENT = 'DEV'
elif fqdn == 'qa.example.com':         ## qa server
    SERVER_ENVIRONMENT = 'QA'
elif fqdn == 'example.com':            ## production server
    SERVER_ENVIRONMENT = 'PROD'

A potential problem with this method is that the server name may change over time. More on this here.
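One defensive tweak I've considered (a sketch, not part of my current code — the override variable name is my own invention): let an explicit environment variable win over the hostname lookup, so a renamed server doesn't silently fall through to UNKNOWN.

```python
import os
import socket

def detect_environment(hostname_map):
    """Return the environment name, preferring an explicit override.

    hostname_map maps fully qualified domain names to environment
    names; SERVER_ENVIRONMENT is a hypothetical override variable.
    """
    override = os.environ.get('SERVER_ENVIRONMENT')
    if override:
        return override
    return hostname_map.get(socket.getfqdn(), 'UNKNOWN')
```

With this, a renamed box only needs one environment variable set instead of a code change and redeploy.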

Once I know the environment, I can import the proper settings file thusly:

settings.py

## Get server specific settings

try:
    if SERVER_ENVIRONMENT == 'Laptop':
        from laptop_settings import *
    elif SERVER_ENVIRONMENT == 'DEV':
        from dev_settings import *
    elif SERVER_ENVIRONMENT == 'QA':
        from qa_settings import *
    elif SERVER_ENVIRONMENT == 'PROD':
        from prod_settings import *
except ImportError:
    pass

Assuming that I have the correct domain names and the correct settings for each instance, I should be good to go in each environment.

Using CAS with Django – an Update

As I move closer to deploying my application in production, I have been rethinking my implementation of CAS in Django, as outlined in an earlier post, Using CAS with Django. Nothing major, but I don’t like the idea of commenting or uncommenting the code based on the server being used. Here’s what I came up with:

First, I added a new variable to settings.py called USE_CAS. This is set to False by default. Following that, I have some code that determines the server (out of scope for this post) and changes the variable to True if needed.

settings.py

USE_CAS = False

## Check for Production server - the real code is much better than this
if SERVER_ENVIRONMENT == 'PROD':
    USE_CAS = True

Later, I can use that variable to decide whether to execute the CAS specific portions of code:

settings.py

## django_cas settings

if USE_CAS:
    CAS_SERVER_URL = 'https://www.example.com/apps/account/cas/' 
    CAS_VERSION = '2'
    
    AUTHENTICATION_BACKENDS = (
        'django.contrib.auth.backends.ModelBackend',
        'django_cas.backends.CASBackend',
    )
    
    MIDDLEWARE_CLASSES += (
        'django_cas.middleware.CASMiddleware',
    )

## end django_cas settings

urls.py

# django_cas
if settings.USE_CAS:
    urlpatterns = patterns('',
        url(r'^accounts/login/$', 'django_cas.views.login', name='login'),
        url(r'^accounts/logout/$', 'django_cas.views.logout', name='logout'),
    ) + urlpatterns

# end django_cas

Now, to properly direct the logout link, I’m taking a different approach. My previous example showed how I changed the links on the top right of the page to point to the proper logout link by replacing that section of the admin template in admin/base_site.html. I probably could have passed in the USE_CAS variable in the context and then selectively modified the template, but I instead wanted to simplify the process, not make it more difficult. After searching around a little, I found an unrelated post about using redirects to handle legacy URLs. This worked great, as I was able to redirect both the logout and change password links to the proper destination. Note that the change password functionality is not part of my project, but is provided by the central IT organization that also provides the CAS services.

urls.py

# django_cas
if settings.USE_CAS:
    urlpatterns = patterns('',
        ('^admin/logout/$', 'django.views.generic.simple.redirect_to',
            {'url': '../../accounts/logout'}),
        ('^admin/password_change/$', 'django.views.generic.simple.redirect_to',
            {'url': 'https://www.example.com/apps/account/ChangePassword'}),
        url(r'^accounts/login/$', 'django_cas.views.login', name='login'),
        url(r'^accounts/logout/$', 'django_cas.views.logout', name='logout'),
    ) + urlpatterns

# end django_cas

Pretty slick, don’t you think? Now I have an easy-to-maintain setup that will work in any of my development and production environments. Plus, I cut the number of code files affected from three to two, and I have no more commenting to worry about.

I would love to hear how others have solved this or a similar issue. Please leave your comments below.

Backing Up my Online Persona

I am a heavy user of online services. Social networks, photo sharing, bookmarking, RSS reader, email, etc. – all of these tools help me organize and communicate. However, I think we all are a little too trusting of the providers of these services. Also, most seem to provide many ways to get data in, without offering ways to get the data back out – either for backup or for portability. In this post, and probably a few more to come, I’ll discuss what I have done, and what I need to do, to better secure my data and to become more independent of any one service.

Google Apps Standard Edition

I’ve set up a Google Apps domain for my family as an easy way to keep email, calendars, and other data independent from the domain of my ISP. Therefore, the Google cloud is storing much of our information online. Luckily, Google also makes it pretty easy to pull data out in a usable format.

Email

Here’s one system where I’m ahead of the curve. About two years ago, I set up a job on my home server that downloads all of my email messages via the provided POP3 interface. It runs once each day, storing all of the new traffic locally, and the resulting file is included in the server backup each night. It probably wouldn’t be easy to import into a new email service, but it certainly is possible.

Note: This is the only backup I currently have automated.

Contacts

Google offers an easy export of contact information in several useful formats. I have used this on a couple of occasions without incident.

Tasks

I don’t use the tasks feature of GMail very often, which is probably good since I haven’t yet found a way to backup the data.

Calendar

I use the Google calendar service, but my family doesn’t seem to share my enthusiasm for the platform. So, it’s mostly me reminding myself of events. I haven’t found a way to back up the calendar entries yet, probably because I haven’t researched it. However, I have used the iCalendar feeds to include data from other calendars, and it is possible that the feed could provide at least a partial backup.

Docs

There are many articles on the Internet describing how to backup documents from Google Docs. I need to read some of them and get this backup going.

Sites

Haven’t used these at all.

Other Google Services

Google Reader

Reader is my preferred choice for processing RSS feeds. One of those reasons is that it provides a simple link to export the feed information into an OPML file, which could then be imported into another account or service very easily. Using the nested outline feature of the OPML format, the tags assigned to the feeds are included. I haven’t yet seen a way to backup starred or shared items, but this may not be that important.
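The exported OPML is easy to work with. A short sketch that walks the nested outlines and reports each feed with its tag (the structure assumed here is Reader's one-level folder nesting; the sample in the test below is illustrative):

```python
import xml.etree.ElementTree as ET

def feeds_from_opml(opml_text):
    """Return (tag, title, xmlUrl) tuples from a Reader-style OPML export."""
    root = ET.fromstring(opml_text)
    feeds = []
    for outline in root.find('body'):
        if outline.get('xmlUrl'):
            ## an untagged feed sitting at the top level
            feeds.append(('', outline.get('title', ''), outline.get('xmlUrl')))
        else:
            ## a tag folder holding feeds one level down
            tag = outline.get('title', '')
            for feed in outline:
                feeds.append((tag, feed.get('title', ''), feed.get('xmlUrl')))
    return feeds
```

The same tuples could then be written back out as OPML for import into another service, tags intact.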

Google Voice

This shares the contacts list with my Google account, so that backup is easy. However, I haven’t used the service long enough to see what other backups are available or if they are necessary. Stay tuned.

Blogger

Both my wife and I run blogs hosted by Blogger (although they have fallen into neglect of late). There are many published ways to back up the blog content into an XML format, though I’m not sure how included images are handled. An easier method I have used in the past, which works well with our sporadic posting schedules, is to save the pages that group posts by month to HTML from a browser. Saving in that format is similar to making a photocopy of a written journal, and is probably sufficient for our needs.

Flickr

Flickr is probably the tool I use the most online. However, much research is required before I can write in an informed fashion on backing up Flickr data. I have seen utilities that will backup/download photos from the service, but I already have the master copies on my home server. What I’m interested in are the tags, comments, groups, sets, and contacts. I wonder what I’ll find.

Facebook

Facebook is all about friends. I think that most of the content is pretty disposable, but the connections are not. Is there a way to export or backup Facebook data? Not that I know of.

Twitter

From my point of view, Twitter and Facebook have very similar backup needs. However, in both cases, the followers and followees (friends) only make sense within the service. How could I export a relationship from Twitter and move it to something like FriendFeed?

Just as with Facebook, I see no reason to backup the tweets I send or those I receive.

Delicious

I backup my Delicious bookmarks fairly often, because I’m always afraid that it is about to be shut off. How can they be making any money? Unfortunately, the export comes out in HTML format, and doesn’t include the tag information. Tags are the most useful feature of the service, and I hope I can find a way to maintain the tag info.
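One possible way around the missing tags (an assumption on my part — I haven't tried it end to end) is the Delicious v1 API, whose posts/all call returns XML in which each post carries its tags in a `tag` attribute. The parsing side would look something like this:

```python
import xml.etree.ElementTree as ET

def bookmarks_with_tags(xml_text):
    """Parse a Delicious posts/all-style XML response into
    (url, description, tags) tuples."""
    root = ET.fromstring(xml_text)
    results = []
    for post in root.findall('post'):
        tags = post.get('tag', '').split()   ## tags are space-separated
        results.append((post.get('href'), post.get('description'), tags))
    return results
```

If that works, the tuples could be dumped to a local file on a schedule, tags and all.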

In future posts, I will chronicle my attempts to backup my online data, and to do so in an automated fashion whenever possible.

Using CAS with Django

Day 1

Today I am finally ready to begin my experimentation using Central Authentication Service with my Django apps. Making it much easier for me, I’m using a package I found in Google Code called django_cas to interface with the University’s CAS service.

Step 1:

Request access from the Security office. This includes providing my development machine IP for them to add to the valid access list, and a request for a couple of test accounts so that I can work with users of different access levels in the application.

Step 2:

Install django_cas 2.0.2 on my dev machine.

Step 3:

Make modifications to the code to use the package. These are easy changes – a couple of additions to the settings.py file for middleware and authentication backend, and adding 2 URL patterns.

As soon as I hear back from Security, I’ll try it out.

Day 2

My Security guy cleared my IP address to access the CAS server, and guess what?  It worked on the first try!

Interesting note: When I had a user who had not been part of the user list attempt to log in, his username was added with no privileges. Not really a problem, but the list could get long if many people try to log in.

Also interesting, the guidance I received from my security guy mentioned three available links, /login, /logout, and /serviceValidate.  However, django_cas uses /proxyValidate instead of /serviceValidate.  I figured I’d have to monkey patch the code to make it work, but the CAS service I’m using seems to work fine with /proxyValidate.

UPDATE: I’ve been testing with CAS 3.3.2, as central IT is planning an update next month, and I’ve found that the /proxyValidate URL no longer works. I imagine that they are no longer supporting it. A quick change to the django-cas code to use /serviceValidate solved the problem, but I hate patching outside code, as I will have to redo the change each time I upgrade (assuming I remember to do so). Perhaps the security office can enlighten me as to the difference between the two methods, as I do not understand the subtleties. Also, I wonder if I should notify the maintainers of django-cas. They probably have different needs than I do, and /proxyValidate works fine for them. END UPDATE

More experimentation tomorrow.

Day 3

As I’m playing around with this, I’m finding a couple of issues to work on.  

First, on the Django Admin pages, there are links for Change Password and Logout User that point to the internal Django functions and not the CAS services. Luckily, the Django creators have included these links in a template block, so it was easy to modify the admin/base_site.html template to offer the correct link to log off. I dropped the change password functionality since that is handled elsewhere on a university-wide basis.

Since the CAS service only allows specified IP addresses to access it, I will have to remove django_cas from my apps when working on the laptop. To do this I need to 1) comment the middleware declaration, 2) comment the authentication backend declaration (both in settings.py), 3) switch the URLs for login and logout back to the defaults, and 4) comment the logout link on the admin pages. Not too tough. I can do the settings stuff via local_settings.py.

I need to decide if I want the user to be logged out from CAS when they log out of the app.  The default is to do both.
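If I remember right, django_cas exposes a setting for exactly this choice; something like the following in settings.py would keep the CAS session alive after an application logout (the setting name is from the django_cas docs — worth double-checking against the installed version):

```python
## settings.py
## Log out of the application only; leave the CAS single sign-on
## session alive so other CAS-protected apps stay logged in.
CAS_LOGOUT_COMPLETELY = False
```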

Day 4

UPDATE: I’ve changed the way this was implemented. For the latest, see this post – Using CAS with Django – an Update.

OK, I think I’m done playing around and ready to install this functionality in a real application.  There are just three files I have to touch.

urls.py:

# django_cas
(r'^accounts/login/$', 'django_cas.views.login'),
(r'^accounts/logout/$', 'django_cas.views.logout'),
# end django_cas

Any existing URLS with these same patterns should be commented.

settings.py

## django_cas settings

CAS_SERVER_URL = 'https://www.example.com/apps/account/cas/'
CAS_VERSION = '2'

AUTHENTICATION_BACKENDS = (
    'django.contrib.auth.backends.ModelBackend',
    'django_cas.backends.CASBackend',
)

MIDDLEWARE_CLASSES += (
    'django_cas.middleware.CASMiddleware',
)

## end django_cas settings

templates/admin/base_site.html
(copy this file from the django.contrib.admin library if you haven’t already created one)

{% block userlinks %}
<a href="/accounts/logout">Logout</a>
{% endblock userlinks %}

That’s all it seems to take. Happy authenticating!

ANOTHER UPDATE:
I added the code from the django-cas page that sets up a custom 403 error page. No issues.

Custom Admin Templates in Django

(Those who are experienced Django developers, or even anyone beyond the beginner stage, will find the following points obvious, but I’m documenting them here for my future reference.)

I’ve been experimenting with custom admin templates in Django, in preparation for a project I’m working on. I quickly found some help on the subject, mainly in the Django book. Chapter 6 lays it out pretty well:

As we explained in Chapter 4, the TEMPLATE_DIRS setting specifies a list of directories to check when loading Django templates. To customize Django’s admin templates, simply copy the relevant stock admin template from the Django distribution into one of the directories pointed to by TEMPLATE_DIRS.

The admin site finds the “Django administration” header by looking for the template admin/base_site.html. By default, this template lives in the Django admin template directory, django/contrib/admin/templates, which you can find by looking in your Python site-packages directory, or wherever Django was installed. To customize this base_site.html template, copy that template into an admin subdirectory of whichever directory you’re using in TEMPLATE_DIRS. For example, if your TEMPLATE_DIRS includes “/home/mytemplates”, then copy django/contrib/admin/templates/admin/base_site.html to /home/mytemplates/admin/base_site.html. Don’t forget that admin subdirectory.

Then, just edit the new admin/base_site.html file to replace the generic Django text with your own site’s name as you see fit.

Note that any of Django’s default admin templates can be overridden. To override a template, just do the same thing you did with base_site.html: copy it from the default directory into your custom directory and make changes to the copy.

Here’s where my beginner status gets in the way. I didn’t know where in the directory structure to put the custom template. After a little playing around, I figured out that it should be in the directory pointed to by my TEMPLATE_DIRS setting in settings.py. Mine reads:

TEMPLATE_DIRS = (
    os.path.join(os.path.dirname(__file__), 'templates').replace('\\', '/'),
)

So, with my project called ‘mysite’ and application called ‘books’, the custom change template for the publisher entity lands in /mysite/books/templates/admin/books/publisher/. Kind of long, but keeps the logic encapsulated with the application.

Next, I wanted to try changing the look of the admin screens. These templates have to be located in a templates directory found in the project directory. This is a little frustrating for me, since I’d like to keep these changes with the application. However, that’s not how Django works, so I’ll just go with it.

(Unrelated note, I tried out my app using lynx as the browser, and found it works pretty well. I think that as long as I strive for full lynx compatibility, I’ll have better luck working with different browsers.)

This post originally appeared on the Linux Server Diary.

Getting Things Done to GTD (Jabber with Google Apps)

This is a complicated process I followed to try to make things easier. It started with an article discussing how to Make Gmail Your Gateway to the Web. Basically, the author is trying to make his GMail account his gateway to everything. I’ve got my Google Apps account all set up to receive email from every account, with filters, tags, and alternate accounts. The calendars are shared with the rest of the family (if I could only get everyone else to use them).

The only thing he’s done that I haven’t is what he calls “update and track your social networks via IM”. So, I set up the ping.fm and notify.me accounts as he describes, and tried it out. It all worked pretty well, except I couldn’t get the notify.me account to validate GTalk.

A little Google research revealed that I have to add 10 SRV entries to the domain’s DNS to properly route the Jabber messages to Google. This Google article explains it pretty well. Next, I had to figure out how to enter this info into a Dreamhost account. I found that the correct method is to enter “_xmpp-server._tcp” in the name field and “5 0 5269 xmpp-server.l.google.com.” in the value field (be sure to include the period at the end). After a little time for the DNS to settle, I tried the validation process again, and it worked great.
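From memory, the full set Google documented looked roughly like this — five records each for _xmpp-server and _jabber (double-check Google’s current documentation before copying, as the targets and priorities may have changed):

```
_xmpp-server._tcp   SRV   5 0 5269  xmpp-server.l.google.com.
_xmpp-server._tcp   SRV  20 0 5269  xmpp-server1.l.google.com.
_xmpp-server._tcp   SRV  20 0 5269  xmpp-server2.l.google.com.
_xmpp-server._tcp   SRV  20 0 5269  xmpp-server3.l.google.com.
_xmpp-server._tcp   SRV  20 0 5269  xmpp-server4.l.google.com.
_jabber._tcp        SRV   5 0 5269  xmpp-server.l.google.com.
_jabber._tcp        SRV  20 0 5269  xmpp-server1.l.google.com.
_jabber._tcp        SRV  20 0 5269  xmpp-server2.l.google.com.
_jabber._tcp        SRV  20 0 5269  xmpp-server3.l.google.com.
_jabber._tcp        SRV  20 0 5269  xmpp-server4.l.google.com.
```

Each value breaks down as priority, weight, port, and target host, which matches the “5 0 5269 …” string Dreamhost asks for in the value field.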

OK, so after all of that, let’s try it out.

Ping.fm works exactly as advertised. I set up micro-blog messages to go to Twitter, and status updates to both Twitter and Facebook. It all works via the chat client in GMail.

Notify.me also did what I expected. I couldn’t find a way to get my entire Twitter stream to come through, but the messages sent to me came through fine. (Still working on direct messages)

However, there is a problem: in the IM that comes in from notify.me, I can’t tell who sent the message. There isn’t any way to configure the message format that I can see. I posted a suggestion to the service.

We’ll see how long I keep this setup going.


This post originally appeared on the Linux Server Diary.