Morning fog hits San Francisco

Photo from December 7, 2005

I drove from the South Bay to San Francisco before sunrise this morning to accomplish two things. One was to photograph a Christmas card to send to friends, on account of my newly launched, just-finished eCards for Plaxo. The other was just to see the legendary view from the east peak of Mt. Tam.

On the drive back, I pulled over to the side of the road and took some handheld photos of which this is one. I liked this angle because from here you can see both the skyline and the Golden Gate Bridge in the frame, while still having enough foreground to show the distance and frame the photograph.

Morning fog hits San Francisco
Mt Tamalpais, Marin, California

Nikon D70, Nikkor 70-200mm f/2.8G VR
3 exposures @ f/13, iso 800, 70mm (105mm)

While I did make a Christmas card, I never processed the other photos until this project came up on my Aperture-to-Lightroom migration list.


Granny G’s Burger at the Boxing Room

Photo from July 27, 2013

Granny G’s burger with egg
Boxing Room, Hayes Valley, San Francisco, California

Apple iPhone 5
1/20sec @ f/2.4, iso 1600, 4.13mm (33mm)

This was after going to the symphony. I guess I was hungry because I asked for a fried egg to be put on top of it.

Since this project came up and only had iPhone images, I thought I’d use a non-Camera+ iPhone image as an opportunity to investigate Lightroom CC’s de-noise and sharpening routines (in the Detail tab of the Develop module). While it is very convenient, I dislike the artifacts it generates when working on underexposed JPEG images.

Surprisingly, a simple application of basic processing seems to oversaturate the reds in the image, which I had to pull down using the HSL controls. I guess Adobe engineers are Canon photographers.

In Lightroom’s defense, the lighting was terrible, so I should be happy anything was usable, as there’s only so much you can recover from a high-ISO camera phone shot. I should learn to bring a real camera out when I eat.

I kissed a girl

Photos from December 15, 2007

The end of my much-beloved Aperture and the start of a new year means a migration to Adobe Lightroom CC is in order. The Python developer who coded the Aperture import plugin for Adobe was clearly underpaid as it is underperforming and crash-prone when you have an Aperture Library as corrupted as mine.

So after a week of failure upon hard disk failure, upon Aperture Vault recovery, upon backups and more backups (lesson learned), I’ve resigned myself to moving one project at a time into Lightroom. But which project?

For that, I wrote a simple AppleScript that selects a random project. Then I move it, verify the map, redo the face detection, and fix the keywords. As a reward, I process and post an image from it and hopefully write a little something. You’ve noticed a few over the last week, and this will continue for…

(If I manage a project a day, it’ll be a couple years before I’ve fully migrated. Such is what happens when you’ve been shooting digitally for over 16 years.)

Jonathan Abrams kindly invited me to the Christmas party of his startup at a bar he co-owns. For some reason both my main camera (a Nikon D200) and my event photography lens were broken at the time. I think I was attending so many events and doing so much traveling that I was extremely hard on my equipment.

That day I dug up my old Nikon D70 and my landscape photography lens, put on the biggest flash diffuser I could find, and started shooting anyway. I really tried to push the camera and lens for all they were worth. Slide is a really great venue, but pre-D3 ISO range and a small-aperture lens can’t really do it justice. Oh well, just focus on the subjects lit by the flash and ignore the rest, because who can see anything else?

I kissed a girl
Slide, Union Square, San Francisco, California

Nikon D70, Nikkor 12-24mm f/4G
1/20 sec @ f/4, iso 1250, 12mm (18mm)


Grabbing “tea”

Me: Does your wife drive a light blue SUV?

M—: Yeah.

Me: I think I saw her crossing Geary, but some other dude was in the passenger side, so I didn’t want to give her up in case she was cheating on you. Lol!

M—: But then your conscience started to nag? 🙂

Me: No. It occurred to me you probably have an open relationship. 😀

Me: I think it was a coworker. Maybe they were going to lunch?

M—: Ah yes, I think she went to some boba tea place on Geary.

Me: Makes sense. The old ice cream place and the Thai restaurant both switched to tea places. She needs to take a different route there or I’ll basically become your informant: “Hey, I saw your wife grabbing ‘tea’ again!”

M—: It’s all good, I constantly stalk her with the Find Friends app — zero trust.

Me: Lol! I forgot about that. Need to remember to turn that shit off when I start cheating. I wonder if there is a service where I can pay someone to walk around with my iPhone when I’m getting it on with my mistress.

Me: New startup idea!

M—: Isn’t that what TaskRabbit is for?

Me: Yeah, but it needs in-app purchasing under some innocuous name so the S.O. doesn’t notice it in my accounting software. “Honey why do you keep getting those Costco packs of iTunes gift cards?”

Fresh Fruit Cup

Photos from October 17, 2010.

Fresh Fruit Cup (organic)
Beach Street Grill, Fisherman’s Wharf, San Francisco, California

Leica M8, Cosina-Voigtländer NOKTON 35mm F1.2 Aspherical
Lightroom (crop, mask, basic, detail, effects)
1/80sec, iso 160, 35mm (46mm)

Marie and I don’t often get to Fisherman’s Wharf since I moved away from there, but since her sister was visiting, we decided to make the drive for breakfast. The nice thing about the Wharf is that the better food places aren’t busy because they aren’t frequented by tourists who are looking for anything labeled as “world famous.”

Because I was one of the first people to post pictures on Yelp, the owners recognize us and sometimes give us a fruit cup while we are waiting for our order. That’s another opportunity to photograph.

As I’ve mentioned before, one of the interesting things about shooting with a Leica camera is its limitations. A close rangefinder focusing distance of 70cm means more than just the food ends up in the frame, showing a bit of the environment the food lives in… even if it only appears as bokeh.

Here is the same fruit cup shot on an iPhone using Camera+

As for processing, I mostly spent the time familiarizing myself with Lightroom’s built-ins. I still think in Aperture (and external plugins), but I’m trying to discover how much I can do in my preferred style in Lightroom. I masked away some of the background saturation, brightness, and detail, though since I’m not yet familiar with the shortcuts, my masking leaves a little to be desired. It’s odd because the more you process an image, the less you can tell it was photographed with a Leica. Lightroom’s film grain effect, while not as good as DxO or Nik, is a great convenience when the image is viewed close up or printed.

Pounce in shadow

Photo from May 3, 2006.

Pounce in shadow
Riverstone Townhomes, Sunnyvale, California

Panasonic Lumix DMC-LX1
Adobe Lightroom
0.6sec @ ƒ2.8, iso 80, 6.3mm (28mm)

I forgot how many things I (still) own from the new apartment: the bicycles and bike rack, my mom’s paintings from Japan, the Sharp Aquos LCD TV set is now in the bedroom, the component rack is in my dad’s house, the Plexo light is in storage, and the speakers are in a pile of stuff to go to Goodwill (the DVI cable and eating tray were already given away). The strange cabling was because I snaked an extra-long DVI cable down from my girlfriend’s PowerMac G5 upstairs so she could show wedding video montages to her clients.

This was my first time using my just-purchased Panasonic Lumix camera in a very low-light situation. I may have pushed the optical image stabilization a bit too far. Surprisingly, even though it is at the native ISO, the RAW file has a lot of noise by today’s standards. Yet back then, when I felt the delay of the exposure, I was shocked that I had a usable image at all. And 28mm and 16:9… oh, that sweet, sweet wide angle!

I miss my cat.

Cameras run in the family

Taikyue Ree somewhere over the Pacific (~1939)

This is a photo of my grandfather. This may have been taken around 1939 somewhere over the Pacific when he came as a postdoctoral student to study at Princeton University.

As you can see by what he is holding, photography runs in the family. 😉

A student of my grandfather’s (and my mom’s) managed to get hold of an old family album sometime after his death in 1992. Because Korea is making a postage stamp of him, the student digitized the album and posted the images for the steering committee, whereupon my uncle sent them to me. It is kind of crazy what sort of memories flipping through these scans bring back to me; I can’t imagine how it makes my uncle feel.

Why “every” developer should be using ansible

AnsibleFest starts tomorrow, so I thought I’d make a case for why you should be using ansible.

First, in order to prevent a ton of hate-comments which usually follow such pronouncements, I should define who you are.

  • You are a developer focused on the web or some web-based tool.
  • You are not already very proficient in operations (e.g. in DevOps).
  • You are responsible in some degree for a system that is not your own (e.g. live server upkeep, deploys, other development or testing machines).
  • You are not in an organization large enough to already have operations/system administration as a distinct and separate organizational unit.
  • You are working on a project or startup that has the potential to grow beyond its current state.
  • You are not currently coding the project/startup mostly in Ruby.

If the above are **all** true for you, then I recommend you learn and use ansible.

What is ansible?

According to Wikipedia, Ansible is a free software platform for configuring and managing computers.

As a developer, this statement means nothing to me. Here is how I saw all development:

  1. Develop.
  2. ???
  3. Profit!

But when I stop to think about the process in the real world, it went something like this:

  1. Join some startup.
  2. Make friends with someone in operations and have them hook me up with a development environment.
  3. Develop (step 1 before).
  4. Commit code.
  5. Convince someone to deploy my code or make a command that I can type to deploy my code.
  6. Go to step 3.

If step 2 or 5 ever were blockers, someone got bitched out until 2 or 5 ceased to be blockers.

But in the last nine years, startups can no longer necessarily afford to have the people responsible for those steps be someone distinct from you, the developer. In fact, sysadmin is often a hat you, as the developer who “knows that computer stuff,” have to wear at an organization.

So putting on your “operations” hat, here is the process of web development from scratch:

  1. Get a working development machine.
  2. Do a system update on that machine.
  3. Get the software you need running on your computer.
  4. Get the software configured so that you have a working directory where you’ll start your project.
  5. Link that working directory with your development environment.
  6. Commit the working directory to a code revision system.
  7. Start developing.
  8. Commit code.
  9. Repeat 1 on a live machine.
  10. Repeat 2 on a live machine.
  11. Repeat 3 on a live machine.
  12. Repeat 4 on a live machine.
  13. Purchase domain names.
  14. Point domain names to the live machine.
  15. Deploy committed code through some process.
  16. Repeat steps 6, 7, and 15 as needed.
  17. When onboarding someone, repeat steps 1-5 as needed for them.
  18. Make any changes on both systems as needed.
So what does ansible do when it says it is “for configuring and managing computers”? It is software that you code to automate steps 2, 3, 4, 9, 10, 11, 12, 14, and 15 for you, as well as help you do step 18. This covers everything that you don’t already know how to do except steps 1 and 5.
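To make that concrete, here is a minimal sketch of a playbook covering steps 2 and 3 (system update, then software install). The host group and package names are assumptions for illustration only, not part of any real project:

```yaml
# site.yml -- a minimal sketch, not a complete provision.
# "webservers" and the package list are illustrative placeholders.
- hosts: webservers
  become: yes
  tasks:
    - name: Step 2 -- update the system
      apt:
        update_cache: yes
        upgrade: dist

    - name: Step 3 -- install the software you need
      apt:
        name: "{{ item }}"
        state: present
      with_items:
        - apache2
        - php5
        - mysql-server
```

Run it with `ansible-playbook site.yml`. The same file replayed against a live machine covers the “repeat on a live machine” steps, and a second run changes nothing, which is the point.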

Other choices

Of course, Ansible is nothing special. There are a lot of other choices out there for configuring and managing computers. As a class, those choices are known as configuration management software. What I hope you’ve learned is that whatever does configuration management can also do machine creation in cloud environments, deploys, and simple ad hoc changes.

Ansible is just one such choice.

So what are your other choices?

In order of decreasing popularity they are:

  1. Yourself, by hand, writing shell scripts as needed for automation.
  2. Puppet
  3. Chef
  4. (Ansible)
  5. Salt
  6. …others…

Why not yourself, by hand?

This is the default choice. It’s what most people are doing right now, every day, and it’s important to remember this because the argument against hand-crafting your system administration is not an argument for ansible, but rather an argument for any configuration manager at all.

The biggest advantage of doing things by hand is that any other option has a sunk cost. Anything you would do in a configuration management system requires that you have already done it by hand at some level. This is a huge argument for the default and should not be discounted.

Learning and using a configuration manager may justify its sunk cost at some point. That point depends on a number of outcomes unrelated to the choice of configuration managers.

If the project you are working on becomes successful then you will be hiring more developers (who need virtual or cloud machines, or development environment setup), deploying on more machines, or adding more steps to your process such as testing, or blue-green deploys.

For each of those machines the work of configuration management is repeated. After a set number of repetitions, you would probably start automating things through ad hoc shell scripts, which you’ve also already done for deployments. At some point that work becomes much greater than the savings generated by not bothering to learn a configuration manager.

Often before that point, your organization may become big enough to warrant a person (later a team) specializing in operations, system administration, or DevOps. In addition, your organization has already been paying extra money for some backup service because of snowflake servers. So while you personally wouldn’t have to directly pay the debt accrued by avoiding a configuration management system, your organization will, both in backup services and in the work for your ops team to port an undocumented live infrastructure. A configuration management system avoids this cost because it acts as both a backup-restore system and a complete, consistent set of documentation of your machines’ states.

This is true even if the operations team chooses a configuration management system that’s different from the one you chose, though obviously if they’re the same, that’s even better.

Even if the project fails, it is still possible to have positive returns from using a configuration management system.

  • A configuration management system is not tied to your cloud provider, so you are free to switch cloud providers from, say, Amazon Web Services to DigitalOcean with minimal hassle.
  • Developer environment setup is manageable using the same system. At some point you may have gotten a return here even if you are the sole developer, if you are rebuilding your environment from scratch enough times. These scripts can be made identical to your production provision and deploy scripts.
  • Deployment and testing can be automated through configuration management. So there is some return from avoiding writing custom scripts for this process.
  • This is a skill needed at any internet-based startup. The skill transfers directly to your next (hopefully successful this time) project.
  • You may find configuration management interesting and find a career as an expert in DevOps as a specialty instead of as a general developer.

Why not something other than Ansible?

I’ve built out a simple LAMP stack in Puppet Apply, Chef Solo, and Ansible. Overall, I don’t find that large a difference between them for the things I’d need to do at this scale.

When deciding which configuration management system to choose as a developer, I weighed two criteria:

  1. I wanted the learning curve (necessary to be proficient enough to replace direct command-line operations on a machine) to be as small as possible.
  2. I wanted to mitigate the amount of lock-in created on a future operations team by this choice.

Puppet, being the oldest and most popular of the four, has the advantage of being the most likely to already be known by an experienced future DevOps hire in most scenarios.

However, Puppet’s execution order is derived via dependencies. While I understand the reasoning, this adds a lot to the learning curve because manifests and modules are not like typing on the command line. Those who say that dependency-based ordering is easier to learn and use than sequential ordering are the same idiots who, a decade and a half ago, told me SAX parsers were more intuitive than DOM ones: go fuck yourself. Furthermore, other CM systems use sequential ordering, so migrating away from Puppet to something like Chef is more costly than it would be from Ansible to Chef.

Finally, I find Puppet manifests and modules very intimidating to someone jumping in medias res, which simply isn’t true of other configuration managers. This is a big deal because I, as a developer, plan on pawning off site operations responsibility onto someone as soon as I can con them into thinking machine maintenance is fun.

I remember watching a core developer of Puppet scratch his head trying to figure out why anyone would choose Chef over Puppet. Once, after someone shat on me for liking Ansible, I stared incredulously as they recommended Chef, since every argument they had just used for Chef over Ansible could equally be used for Puppet over Chef. Having said that, Chef does have two advantages over Puppet. The first is that Chef recipes and cookbooks have ordered execution like you’d expect, and the second is that they are written in Ruby. However, Ansible already does the first, and I only care to know enough Ruby to edit a Vagrantfile. Gaining proficiency in another programming language just to execute commands on a machine, when I already know shell scripting, is a waste. Also, I’d think having a programming language at the ready instead of a markup language is a recipe (pardon the pun) for encouraging a developer to rely on it instead of learning CM best practices. In the process, an adept programmer/beginner DevOps is likely to break important features of good configuration management like idempotence.
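Idempotence is worth one concrete illustration: two hypothetical tasks creating the same directory, one fragile and one state-based. The path is an arbitrary example:

```yaml
# Non-idempotent: shell runs mkdir every time, and without -p the
# second run fails because the directory already exists.
- name: Create a directory the fragile way
  shell: mkdir /opt/myapp

# Idempotent: the file module describes the desired state and does
# nothing if the directory is already there.
- name: Create a directory the idempotent way
  file:
    path: /opt/myapp
    state: directory
```

A developer reaching for raw Ruby (or raw shell) tends to write the first kind without noticing; the module-based style gives you the second kind for free.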

However, if you are a Ruby developer, I’d recommend Chef, because you already know the language and because it’s highly likely the DevOps person you hire (or the Ruby shop you work at next) will be using Chef as their configuration management system. Sometimes life in the bubble is good; why burst it?

With Ansible, you will have to learn YAML to write tasks and playbooks, and Jinja2 for templating. Both are trivial. Tasks resemble, almost exactly, a command you would type on the command line, so the learning curve is quick. Once in a while, a command you type might actually be two separate tasks. Oh, the humanity! The sky is falling! It connects to machines without a client, just as you would if you were to ssh into a box. In fact, the biggest difficulties I’ve found were not in finding which ansible module emulated the command line I wanted, but in navigating the idiosyncrasies of SSH.
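To show how closely tasks track the command line, here is a hypothetical command and its equivalent task (the package name is just an example), plus the occasional “one command becomes two tasks” case:

```yaml
# On the command line you would type:
#   sudo apt-get install -y nginx
# The equivalent task reads almost the same:
- name: Install nginx
  apt:
    name: nginx
    state: present
  become: yes

# The two-task case: "edit a config, then restart the service"
# splits into a template task and a service task.
- name: Push the nginx config
  template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
  become: yes

- name: Restart nginx
  service:
    name: nginx
    state: restarted
  become: yes
```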

Salt has a core focus on scalability and performance at the cost of an agentless model, though I’ve heard rumor that an agentless mode does actually exist in Salt. Unfortunately, nearly every tutorial steered me away from it toward the server-client one because of the magic pixie dust that powers Salt; it goes by the name ZeroMQ. This is a non-starter in terms of learning curve for a developer who suffers through, rather than enjoys, DevOps. Other than that, I think they’ve made better architectural choices at every turn than all the other CM systems, including Ansible.

The biggest B.S. argument against Ansible.

I remember over dinner one day, an ops engineer scoffed when I mentioned I was using ansible for my personal projects. When I asked him what I should use instead, he said, “puppet.” When I asked him why, he said, “Ansible doesn’t scale.” For about half of my career, Puppet didn’t exist, so I told him, “I guess no website built before 2006 ever scaled.”

Cal Henderson is fond of saying that all programming languages scale; it’s architectures that do not. In mathematical terms, this is because all programming languages are Turing complete, which means they’re computationally equivalent to each other: each can simulate the rest.

In the same way, all configuration management systems are interchangeable; at the end of the day, they’re just executing commands on the target machine. Worst-case scenario, you could make an ansible playbook whose only task is to make sure the puppet agent is up to pull from its master. That’ll scale just as well as any puppet client configuration.
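That worst-case playbook would itself be only a few lines. A sketch, with the caveat that the package and service names are assumptions and vary by distribution:

```yaml
# A playbook whose sole job is to keep the Puppet agent alive
# so it can pull from its master.
- hosts: all
  become: yes
  tasks:
    - name: Ensure the Puppet agent is installed
      apt:
        name: puppet
        state: present

    - name: Ensure the agent is running so it pulls from its master
      service:
        name: puppet
        state: started
        enabled: yes
```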

If someone tells you that some sort of configuration management system doesn’t scale, what they’re really saying is, “I couldn’t get that configuration management system to scale.” Which really is more an indictment on their competence, rather than the software they’re disparaging.

How do I get started?

Do a web search for “ansible”, “your favorite language”, “your favorite server”, and “your favorite flavor of operating system.”

As for me, I started with Phansible, and before that, this tutorial on setting up Vagrant to run an ansible playbook.

One last argument

So there you have it: my argument to encourage you to learn and use Ansible if you are an internet developer who does not program in Ruby or already have a preferred CM system, but who has to wear a sysadmin hat as part of a job involving some sort of project that might reach scale.

Yes, that’s a pretty tight set of requirements. Change any one of those and there’s just not enough of a difference between CM systems to justify one as being definitive.

But if you are still going to shit on me for recommending ansible, because [insert your favorite CM system] is so much better, let’s take a moment to consider that the developer I’m addressing is probably not going to use your CM system, but rather none at all. And that person might be the person who hires you someday. To that future you I say, “Would you rather port some ansible playbooks to your clearly superior configuration manager, or a bunch of random shell scripts and snowflake servers?”

You can thank me for making future-you’s life significantly less miserable. 🙂

Getting Ansible to work with DigitalOcean API v2

Today I finally decided to solve something I’ve been putting off for a while. I recently migrated to the reasonably-priced cloud hosting solution DigitalOcean. Version 1 of their API will stop working on November 9. Ansible, the tool I use to automate my servers and developer installs, currently uses version 1 and will not support version 2 of the DigitalOcean API until Ansible 2.0, which was supposed to have been released by now. My guess is that since it is almost November, the ansible team decided to wait until AnsibleFest to release 2.0. Unfortunately, that’s on November 19th.

So there’s a 10-day window plus developer time where any playbooks I’m using will no longer work on the live site. Not cool.

So I decided to start up a project to test a working fix for this.

The first thing I needed to do was install Ansible 2.0 from source using the instructions on the website.

$ git clone git:// --recursive
$ cd ansible
$ source ./hacking/env-setup
$ sudo pip install paramiko PyYAML Jinja2 httplib2 six
$ make
$ sudo make install

This installed Ansible into my /usr/local/bin, which I verified was correct by typing ansible --version

Next I installed dopy, the Python wrapper for the DigitalOcean API:

$ sudo pip install dopy

Log in to DigitalOcean and get a v2 token (there is no client_id in v2), and add the below to your .profile, replacing api_token with the one you generated.

$ export DO_API_VERSION='2'
$ export DO_API_TOKEN='api_token'

Generate an RSA key pair and upload it to DigitalOcean if you haven’t already done so, then copy the following provision script into provision.yml (example modified from Jeff Geerling’s Ansible for DevOps, chapter 7), remembering to modify the ssh key signature.
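(The embedded gist does not reproduce here, so the following is a sketch in the spirit of Jeff Geerling’s example, updated for API v2 name-based lookups. The droplet name, region, size, image, and key name are all placeholders to substitute with your own.)

```yaml
# provision.yml -- a sketch modeled on Ansible for DevOps ch. 7.
# All values below are illustrative placeholders.
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create a new droplet (v2 lets you use slugs and names)
      digital_ocean:
        state: present
        command: droplet
        name: ansible-test
        size_id: 512mb
        region_id: sfo1
        image_id: ubuntu-14-04-x64
        ssh_key_ids: my-ssh-key
        unique_name: yes
      register: created_droplet
```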

Then run the script with the command:

$ ansible-playbook provision.yml

This will create an Ubuntu droplet, log in, and run an install on it, all automated-like. You’ll notice that unlike Jeff’s script, you can refer to regions, images, sizes, and keys by name instead of looking up esoteric numbers for them.

Good luck and have fun!

Remembering Mister Rogers

Marie posted this link of Mr. Rogers:

It reminded me how I was fortunate enough to have met him.

My mom’s side is Catholic, but my Dad’s side is Presbyterian (Dad’s family, not Dad; Dad is what my mom liked to call a Seventh-day Absentist: every seventh day, he was absent from church). After Ken was confirmed, Mom allowed us to go to either church. In high school, when my brother had a car, this meant trips every Sunday to the Korean Presbyterian Church.

Mister Rogers’ Neighborhood was filmed at Pittsburgh’s local public television station, and he was an ordained Presbyterian minister. He belonged to the Sixth Presbyterian Church of Pittsburgh, located in Squirrel Hill. At the time, the Korean Presbyterian Church hadn’t scraped together enough money to buy its own building, so its services were held out of the Sixth Church on Sunday afternoons. Sometimes Mr. Rogers would stay late for the Korean Sunday school kids.

One time he made a guest appearance with us high schoolers. He sat down with a suitcase of all his puppets on his lap. We’d ask him to do all the voices of our childhood: King Friday, Queen Saturday, Henrietta Pussycat, etc., and with a nervous smile, he’d reach into the suitcase and the requested character from the Neighborhood of Make-Believe would pop up from behind the open case and address us. Even Daniel Striped Tiger made an appearance, though he was very worn through and extremely shy.

Some people are exactly who they appear to be, and Mr. Rogers was one of them. It was pretty awesome.

He was pretty awesome. 🙂