Categories
Uncategorized

DevOps Insights from REDtalks 14

I recently had the good fortune to encounter Tom McGonagle, SE with F5, via the Boston DevOps chatroom, moderated by Dave Fredricks.  I had been invited to post in Dave’s newly created mentor/mentee topic channel, which I welcomed as I had been looking for guidance around a side project of mine.  Tom contacted me through chat, and before the morning was out, we were enjoying a crisp pair of pizzas, the artful pies you can only get downtown.

We exchanged impressions on working in the tech industry, on the big-hearted, quirky and iconic culture that makes being an engineer among engineers so incredibly rewarding.  We concluded with an invite from Tom to one of the meetups he co-organizes, Hackernest in Artisan’s Asylum, so I marked my calendar and went on my way.

Before the week was out, Tom sent me a link to REDtalks #14: Tom & David on the Principles & Practices of DevOps with host Nathan Pearce, featuring Tom along with fellow DevOps specialist and Bentley U alumnus David Yates.

When I sat down to listen, I expected an informative piece with some new-to-me tidbits here and there.

This podcast captivated me.  Rather than listening passively from one end to the other, I found myself skipping back and forth to make sure I was getting exactly what was being said.  For Tom specifically, as the one who reached out to me: congratulations – this is fantastic.

Here are my (extensive!) notes from this most excellent podcast.

Yates – 6:10 – The DevOps Handbook by Gene Kim and the three ways

  1. continuous delivery – testing and QA as a first class object, how do you pull that left in the pipeline and do it early, often, iteratively and incrementally
  2. continuous intelligence – how do you pull it all into a central location and make sense of what was happening in your application and infrastructure
  3. continuous learning – “fail early and fail often”, don’t be afraid to take risks, you can only learn by practicing and getting better, experimentation as culture, that includes getting the components of the infrastructure to harmonize with each other

Yates – 11:30 – teams uniting around a common mission

  • Quarter over quarter, having a common goal as to how the team can get better.  One of those goals can be customer education.

OKRs – Google’s term, objectives and key results

McGonagle – 12:31 – CAMS

  • CAMS stands for culture, automation, monitoring, and sharing.  Sharing is critical – as a DevOps engineer, DevOps consultant, or DevOps SME at F5, there is a fiduciary responsibility to share these idea viruses.  One of the idea viruses that I’m hot on right now is the idea of agile networking; it’s my language around the application of agile and DevOps principles to the field of network engineering … it’s part and parcel of being part of the DevOps community, you have to share.  As part of my sharing, David and I organize the Boston area Jenkins meetup group – the largest Jenkins meetup group in the world.  It’s part of getting out into the community and getting people aware of and interested in DevOps.

McGonagle – 14:00 – 9 Practices of DevOps

Practice 1: 14:15 – Configuration Management – you can templatize your configurations and drive your autonomic infrastructures that self-build, self-configure and self-automate

  • Question from Yates on Practice #1: 16:20 – What are the best practices around Configuration management?
  • Answer about best practices from McGonagle at 16:40 –  use facts to drive your configuration, intelligence gathering about the server, self-identifying and self-configuring
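To make the facts-driven idea concrete, here is a minimal, hypothetical sketch of what it can look like at the shell level.  It assumes Puppet’s facter is installed; the “role” fact, the template file, and the output path are placeholders of my own, not anything from the podcast.

#!/bin/bash
# Minimal sketch of facts-driven configuration (assumes Puppet's facter is installed;
# the "role" fact, template file, and output path are hypothetical placeholders)
HOSTNAME=$(facter hostname)     # intelligence gathering: the node reports about itself
ROLE=$(facter role)
ROLE=${ROLE:-default}           # fall back if the custom fact is not defined
# Render a templatized configuration from what the node knows about itself
sed -e "s/{{hostname}}/${HOSTNAME}/" -e "s/{{role}}/${ROLE}/" \
    app.conf.template > /etc/app/app.conf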

Yates – 21:00 – one of the big motivators for DevOps is that it’s the marriage of modern management and IT best practices, a positive feedback loop between business requirements and IT delivery

Yates – 21:31 – the business reasons that give DevOps legs

Yates – 21:45 – DevOps from all points of view, IT best practices

Practice 2: 22:59 – Continuous integration – a robot such as Jenkins takes your code from a source code management repository, builds it, and tests it in a continuous way; every time a developer commits code, the robot runs it against the functional and unit tests, which gives the developers awareness of the quality of the code

  • McGonagle – 25:40 – Linting – check the code for the appropriate format, which eliminates an enormous amount of errors, a test that can be orchestrated through a tool like Jenkins
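As a rough sketch of my own (not from the podcast), the per-commit job a robot like Jenkins runs might boil down to something like the following; the script names are placeholders for your project’s own linter and test runners.

#!/bin/bash
# Sketch of a per-commit CI job: lint first, then unit and functional tests
set -e                       # any failure fails the build and alerts the developer
shellcheck scripts/*.sh      # linting: catch formatting and syntax errors cheaply
./run_unit_tests.sh          # unit tests (placeholder)
./run_functional_tests.sh    # functional tests (placeholder)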

Practice 3: 26:40 Automated testing – TDD, test driven development, build the test into your CI infrastructure, “write the unit test before the code”

  • Yates – 27:53 – TDD is one of the core principles of the XP Agile framework, make sure you know it works before you roll it out, especially for security
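A minimal, hypothetical illustration of the “write the unit test before the code” ordering, in shell: the test file exists (and fails) before the implementation is ever written.

# test_add.sh – written first; it fails until add.sh exists and behaves correctly
source ./add.sh
[ "$(add 2 3)" -eq 5 ] && echo "PASS" || { echo "FAIL"; exit 1; }

# add.sh – written afterwards, with just enough code to make the test pass
add() { echo $(( $1 + $2 )); }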

Practice 4: 29:15 – Infrastructure as Code – software project for your infrastructure with all the benefits applied to infrastructure, infrastructure is programmable and extensible, saves time and validates the process

  • Yates – 34:14 – canary release – don’t put out a new release everywhere at once, put it out in an isolated deployment so it can be rolled back quickly, if it succeeds then roll it out more widely

Practice 5: 35:40 – Continuous delivery – the way the code is rolled out, there’s a button that’s pushed to release – do you push a button to release?

Practice 6: 35:40 – Continuous deployment – the code constantly goes to production – do you create a button to release?

Practice 7: 18:16 – Continuous monitoring – metrics driven devops, APM – application performance monitoring, instrumenting your code to expose various qualities about your code and infrastructure to a metrics gathering tool

  • McGonagle – 39:27 – ACAMS+ -> add Agile to culture, automation, monitoring, and sharing, plus whatever is important to you

Practice 8: 40:30 – Develop an engaged and inclusive culture to encourage collaboration and shared ownership

  • Tom’s Amish barn raising post – a culture in which all teams are working toward the same goal
  • Yates – 41:44 – students run three sprints using scrum, the most important thing you can do is own the product you’re going to deliver, having empathy for teammates, easier to say than do

Practice 9: 43:47 – Actively participate in communities of practice to become a lifelong learner of technology development (don’t be a jerk!) – going to conferences, being a speaker, a good participant, a nice person, a listener, the benefit is the learning opportunities it creates

My final takeaway is I am humbled by the privilege of being able to work in an industry distinguished by a culture of enthusiasm, passion and ownership.

While no profession can be exempt from drudgery, the devops culture of cheerful collaboration has, by virtue of its effectiveness, become an accepted prerequisite for deploying a successful product.  As a result, the typical corporate cynicism is mitigated and even replaced by an expressive and generous optimism.  Innovative and disruptive indeed.

Categories
politics

Darkness, Redemption and the President

I was devastated on Wednesday by the election results. When I came home from work I curled up in bed with the lights out. I yelled at Jeff because he wasn’t angry enough. It was painful to me that he didn’t seem to be hurting like I was.

For hours that night I let the darkness shrivel me up and push him away.  I refused to give him our customary good night kiss. That hurt him, and it made me feel good that I hurt him.

At some point that night, in the dark, I realized this was not a path I wanted to go down.
I love many people who disagree with me. A point of pride, one I brag about, is that Jeff and I can love each other while disagreeing on most things. But what can unite us when we disagree on something so fundamental? If I question his conscience, are we even compatible anymore?

Terrorists win if we are terrified to live our lives. Hatred wins if we hate the people who share our lives.  I can’t love only part of Jeff, or cherry-pick what parts of him I think are ok to love.  And I can’t do that to them either.

I’m not speaking hypothetically, or generically. These are actual people, family, friends, who depend on me and who love me. How can I let them down by blaming them for a situation I already refused to own? I let Trump get elected, this is ultimately on me. Am I speaking figuratively or collectively? Probably not as much as I’d like to think.


TO PROTEST

Let’s talk protests. Jeff offered to go with me. I ultimately decided against protesting … for now.

Why not protest? What is there to protest right now? If Hillary Clinton had been elected, there would be no basis for protesting, so why is there one now? I want to protest injustice in the system, not outcomes. We should not protest the rules because our team lost by them, and no one should protest the mere fact of Trump being president.  By protesting an outcome we recognize as fair, we are weakening the impact of protests to come.

Ultimately, Jeff and I care about the same things. We disagree on how to get there, but we are fundamentally united in our agreement on principles of behavior, government and ethics.

Not everyone who supported Donald Trump agreed with him on principle.  Some supported him because they considered Hillary Clinton to be a worse threat to the United States, or because they considered Trump’s economic policies to be beneficial (whatever those might be).  For those people, the above statement applies, because the underlying principles of their decision were aligned with my own.

For the others who did agree with Trump’s principles, as far as I am concerned, it comes back to the Christian principle of love thy neighbor as thyself.  This is purely because my personal spirituality includes having faith in that principle. I’ve looked down the other path, and it’s not something I would want for myself or anyone else I care about.

In the meantime, regroup, reorganize, blog, let your voice be heard. Save your strength, because the times are coming when we will have injustices to protest, and targeted lives to defend.  Let them come.

Categories
Uncategorized

Hello Acquia

As of Dec. 5 2016, I will be starting with Acquia as a Cloud Engineer, supporting their Drupal Cloud on AWS. I am super stoked to be applying my experience in scalability, virtualization, and continuous integration to Acquia’s cloud offering.

Acquia has been around since 2007 and they’ve been kicking ass ever since. Read more at Acquia Pipelines: Build, Test, and Deployment Automation for Acquia Cloud

Categories
Uncategorized

Resolving Hadoop Problems on Kerberized CDH 5.X

I ran into a problem in which I had a Kerberized CDH cluster and couldn’t run any hadoop commands from the command line, even with a valid Kerberos ticket.

So with a valid ticket, this would fail:
hadoop fs -ls /
WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]

Here is what I learned and how I ended up resolving the problem. I have linked to Cloudera doc for the current version where possible, but some of the doc seems to be present only for older versions.

Please note that the problem comes down to a configuration issue but that Kerberos itself and Cloudera Manager were both installed correctly. Many of the problems I ran across while searching for answers came down to Kerberos or Hadoop being installed incorrectly. The problem I had occurred even though both Hadoop and Kerberos were functional, but they were not configured to work together properly.

TL;DR

MAKE SURE YOU HAVE A TICKET

Do a klist as the user you are trying to execute the hadoop command as.

sudo su - myuser
klist

If you don’t have a ticket, it will print:

klist: Credentials cache file '/tmp/krb5cc_0' not found

If you try to do a hadoop command without a ticket you will get the GSS INITIATE FAILED error by design:
WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]

In other words, that is not an install problem. If this is your situation, take a look at http://www.roguelynn.com/words/explain-like-im-5-kerberos/ . For other troubleshooting of Kerberos in general, check out https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/errors.html


CDH Default HDFS User and Group Restrictions

A default install of Cloudera has user and group restrictions on execution of hadoop commands, including a specific ban on certain users (more on page 57 of http://www.cloudera.com/documentation/enterprise/5-6-x/PDF/cloudera-security.pdf ).
There are several properties that deal with this:

Specifically for user hdfs, make sure you have removed hdfs from the banned.users configuration property in hdfs-site.xml configuration if you are trying to use it to execute hadoop commands.

1) Unprivileged User and Write Permissions

The Cloudera-recommended way to execute Hadoop commands is to create an unprivileged user and matching principal, instead of using the hdfs user. A gotcha is that this user also needs its own /user directory and can run into write permissions errors with the /user directory. If your unprivileged user does not have a directory in /user, it may result in the WRITE permissions denied error.
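For example (a hedged sketch – the user name is a placeholder), creating that home directory as the HDFS superuser looks roughly like this:

sudo -u hdfs hadoop fs -mkdir /user/myuser
sudo -u hdfs hadoop fs -chown myuser:myuser /user/myuser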

Cloudera Knowledge Article

http://community.cloudera.com/t5/CDH-Manual-Installation/How-to-resolve-quot-Permission-denied-quot-errors-in-CDH/ta-p/36141

2) Datanode Ports and Data Directory Permissions
Another related issue is that Cloudera sets dfs.datanode.data.dir to 750 on a non-kerberized cluster, but requires 700 on a kerberized cluster. With the wrong dir permissions set, the Kerberos install will fail. The ports for the datanodes must also be set to values below 1024, which are recommended as 1006 for the HTTP port and 1004 for the Datanode port.
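A hedged sketch of what that looks like in practice (the data directory path is a placeholder – substitute whatever dfs.datanode.data.dir points to on your cluster):

# Assumed data dir /data/1/dfs/dn – use your own dfs.datanode.data.dir entries
chmod 700 /data/1/dfs/dn
# Corresponding settings (via Cloudera Manager or hdfs-site.xml):
#   dfs.datanode.data.dir.perm = 700
#   dfs.datanode.address       = 0.0.0.0:1004
#   dfs.datanode.http.address  = 0.0.0.0:1006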

Datanode Directory

http://www.cloudera.com/documentation/enterprise/5-6-x/topics/cdh_ig_hdfs_cluster_deploy.html

Datanode Ports

http://www.cloudera.com/documentation/archive/manager/4-x/4-7-2/Configuring-Hadoop-Security-with-Cloudera-Manager/cmchs_enable_security_s9.html

3) Service Specific Configuration Tasks

On page 60 of the CDH security doc, there are steps to kerberize Hadoop services. Make sure you did these!

MapReduce

sudo -u hdfs hadoop fs -chown mapred:hadoop ${mapred.system.dir}

HBase

sudo -u hdfs hadoop fs -chown -R hbase ${hbase.rootdir}

Hive

sudo -u hdfs hadoop fs -chown hive /user/hive

YARN

rm -rf ${yarn.nodemanager.local-dirs}/usercache/*

All of these steps EXCEPT for the YARN one can happen at any time. The step for YARN must happen after Kerberos installation because it removes the user cache for non-kerberized YARN data. When you run MapReduce after the Kerberos install, it should repopulate this with the Kerberized user cache data.

YARN User Cache
http://stackoverflow.com/questions/29397509/yarn-application-exited-with-exitcode-1000-not-able-to-initialize-user-directo

Kerberos Principal Issues

1) Short Name Rules Mapping
Kerberos principals are “mapped” to the OS-level service users. For example, hdfs/WHATEVER@REALM maps to the service user ‘hdfs’ in your operating system only because of a name mapping rule set in the core-site.xml of Hadoop. Without name mapping, Hadoop wouldn’t know which user is authenticated by which principal.

If you are using a principal that should map to hdfs, make sure the principal name resolves correctly to hdfs according to these Hadoop rules.

Good
(has a name mapping rule by default)

  • hdfs@REALM
  • hdfs/_HOST@REALM

Bad
(no name mapping rule by default)

  • hdfs-TAG@REALM

The “bad” example will not work unless you add a rule to accommodate it.
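For illustration only (the realm is a placeholder), a rule added to the hadoop.security.auth_to_local property in core-site.xml that maps the “bad” principal to the hdfs user could look like this:

RULE:[1:$1@$0](hdfs-TAG@MY.REALM)s/.*/hdfs/
DEFAULT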

Name Rules Mapping
http://www.cloudera.com/documentation/archive/cdh/4-x/4-5-0/CDH4-Security-Guide/cdh4sg_topic_19.html

2) Keytab and Principal Key Version Numbers Must Match
The Key Version Number (KVNO) is the version of the key that is actively being used (as if you had a house key but then changed the lock on the door to use a new key – the old one is no longer any good). Both the keytab and the principal have a KVNO, and the version numbers must match.

By default, when you use ktadd or xst to export the principal to a keytab, it randomizes the principal’s key and bumps its KVNO, so any previously exported keytab (or the principal’s existing password) no longer matches. So you can end up accidentally creating a mismatch.

Use -norandkey with kadmin or kadmin.local when exporting a principal to a keytab to avoid changing the key and creating a KVNO mismatch.
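For example (the keytab path and principal name are placeholders), the export would look like this:

kadmin.local -q "xst -norandkey -k /etc/security/keytabs/myuser.keytab myuser@MY.REALM"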

In general, whenever you have principal authentication issues, make sure to check that the KVNO of the principal and keytab match:
Principal
kadmin.local -q 'getprinc myprincipalname'

Keytab
klist -kte mykeytab

Creating Principals
http://www.cloudera.com/documentation/archive/cdh/4-x/4-3-0/CDH4-Security-Guide/cdh4sg_topic_3_4.html

Security Jars and JAVA Home

1) Java Version Mismatch with JCE Jars
Hadoop needs the Java security JCE Unlimited Strength jars installed in order to use AES-256 encryption with Kerberos. Both Hadoop and Kerberos need to have access to these jars. This is easy to miss because you can think you have the security jars installed when you really don’t.

JCE Configurations to Check

  • the jars are the right version – a limited-strength version of the policy jars is bundled with Java, and if you install the unlimited-strength ones after the fact you have to make sure the version of the jars corresponds to the version of Java or you will continue to get errors.
    To troubleshoot, check the md5sum hash of the JCE jars from a brand new download of the same exact JDK that you’re using against the md5sum hash of the ones on the Kerberos server.
  • the jars are in the right location ( JAVA_HOME/jre/lib/security )
  • Hadoop is configured to look for them in the right place. Check if there is an export statement for JAVA_HOME to the correct Java install location in /etc/hadoop/conf/hadoop-env.sh

If Hadoop has JAVA_HOME set incorrectly it will fail with GSS INITIATE FAILED. If the jars are not in the right location, Kerberos won’t find them and will give an error that it doesn’t support the AES-256 encryption type (UNSUPPORTED ENCTYPE).
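A quick, hedged way to sanity-check all three points from the shell (paths assume an Oracle JDK layout):

echo $JAVA_HOME
grep JAVA_HOME /etc/hadoop/conf/hadoop-env.sh     # does Hadoop point at the same JDK?
ls -l $JAVA_HOME/jre/lib/security/local_policy.jar $JAVA_HOME/jre/lib/security/US_export_policy.jar
md5sum $JAVA_HOME/jre/lib/security/*.jar          # compare against a fresh download of the same JDK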

Cloudera with JCE Jars
http://www.cloudera.com/documentation/enterprise/5-5-x/topics/cm_sg_s2_jce_policy.html

Troubleshooting JCE Jars
https://community.cloudera.com/t5/Cloudera-Manager-Installation/Problem-with-Kerberos-amp-user-hdfs/td-p/6809

Ticket Renewal with JDK 6 and MIT Kerberos 1.8.1 and Higher

Cloudera has an issue documented at http://www.cloudera.com/documentation/archive/cdh/3-x/3u6/CDH3-Security-Guide/cdh3sg_topic_14_2.html in which tickets must be renewed before hadoop commands can be issued. This only happens with Oracle JDK 6 Update 26 or earlier and package version 1.8.1 or higher of the MIT Kerberos distribution. To check the package, do an rpm -qa | grep krb5 on CentOS/RHEL or aptitude search krb5 -F "%c %p %d %V" on Debian/Ubuntu.

The workaround given by Cloudera is to do a regular kinit as you would, then do a kinit -R to force the ticket to be renewed.
kinit -kt mykeytab myprincipal
kinit -R

And finally, the issue I actually had which I could not find documented anywhere …

Configuration Files and Ticket Caching


There are two important configuration files for Kerberos, the krb5.conf and the kdc.conf. The krb5.conf configures clients and libraries, while the kdc.conf configures the krb5kdc service and the KDC database. My problem was that the krb5.conf file had a property:
default_ccache_name = KEYRING:persistent:%{uid}

This set my cache name to a KEYRING:persistent cache keyed on the user uid ( explained https://web.mit.edu/kerberos/krb5-1.13/doc/basic/ccache_def.html ). When I did a kinit, the ticket went into the keyring cache rather than the default file cache in /tmp, so the hadoop command line never saw it. Cloudera services obtain authentication with files generated at runtime in /var/run/cloudera-scm-agent/process , and these all export the cache name environment variable ( KRB5CCNAME ) before doing their kinit. That’s why Cloudera could obtain tickets but my hadoop user couldn’t.

The solution was to remove the line from krb5.conf that set default_ccache_name and allow kinit to store credentials in /tmp, which is the MIT Kerberos default value DEFCCNAME (documented at https://web.mit.edu/kerberos/krb5-1.13/doc/mitK5defaults.html#paths ).
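After making that change, a quick way to confirm the fix (keytab and principal are placeholders) is:

kinit -kt mykeytab myprincipal
klist              # should now show: Ticket cache: FILE:/tmp/krb5cc_<uid>
hadoop fs -ls /    # and the hadoop command should finally succeed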

Liked this post and want to hear more? Follow me at https://twitter.com/saranicole and connect at https://www.linkedin.com/in/sarastreeter

Cloudera and Kerberos installation guides

Step-by-Step
https://www.cloudera.com/documentation/enterprise/5-6-x/topics/cm_sg_intro_kerb.html
Advanced troubleshooting
http://www.cloudera.com/documentation/enterprise/5-6-x/PDF/cloudera-security.pdf , starting on page 48

Categories
Uncategorized

Update on Minecraft on Digital Ocean

DevOps Day is this week! For the presentation I have a brand-y new Ansible playbook up on Github that lets anyone roll their very own Minecraft server on Digital Ocean.

Check it out at github.com/saranicole/stem-minecraft. You can also see how the playbook works in action on Asciinema at asciinema.org/a/2gojihwmv3k8urg2oujppe66q.

Even Later Update:
Slides are posted at http://slideshare.net/saranicole1980/building-stem-with-minecraft
Gorgeous “Eleanor” Powerpoint Template available at http://www.slidescarnival.com/eleanor-free-presentation-template/308
Video: https://www.youtube.com/watch?v=FsfjWMs67DE
DevOps Days Boston Speaker profile https://www.devopsdays.org/events/2016-boston/program/sara-jarjoura/

Enjoy!

Categories
code

From Zero to DevOps through Minecraft

I am blessed with a pretty awesome family, and as part of that awesomeness it just so happens that my sister’s two kids have a passion for Minecraft.  Her oldest, Cyrus, is 9 years old and had on his own already gotten into the more technical aspects available in the PC version of Minecraft.  When I saw what he was doing with the basic functionality, I figured it would be fun for him to take it to the next level and add some DevOps-themed wizardry to his Minecraft chops.

Minecraft is Already a Learning Playground

It’s worth it to mention that I myself am passionate about Minecraft, both for my own enjoyment and also for its success as a teaching platform.  Minecraft gives kids a context from which they end up almost accidentally developing a massive variety of skills, many of them technically oriented.  Minecraft encourges resourcefulness, initiative, curiosity … it’s discovery through play, the very best kind of learning.

Here are just a few examples of what my sister’s kids learned on their own through Minecraft without any adult intervention:

The six year old:

  • How to recognize words – Minecraft’s crafting heads up display includes the name of the tool above its picture
  • How to be constructive – when she first started playing, all she wanted to do was tear apart what her brother had built.  Eventually destruction became boring so she started her own creations
  • Fair play – you learn really quickly that what you can do to others they can – and will – do back to you

The nine year old:

  • Resourcefulness and Teaching Ability – When I first started playing Minecraft, it was not obvious to me why you had to seek out recipes for crafting objects.  To make a pickaxe, for example, you create sticks and combine them in a pattern with cobblestone.  I thought that this barrier would turn off beginners who wouldn’t want to have to figure out how to make things.  Then I realized that discovery of new crafting recipes is the very point of the game. 

    When Cyrus wants to build something in Minecraft, he searches the Minecraft wiki, looks for similar projects on Youtube, and reads relevant content.  By doing this, not only has he learned to educate himself, he has also learned to educate others effectively.  When he wants to show me how to do something in Minecraft, he starts at a simple example and then builds on that with increasingly complicated iterations. I’ve seen professionals on stage giving demos that haven’t learned this yet.

  • Rudimentary backup/source control – Cyrus discovered that by using the Minecraft console (accessible in the PC version with the slash “/” character), he could quickly copy any structure he created to a different set of coordinates.  This gave him a way to “save” his work at a certain stage of development. He would build something complex, such as a castle, in a large rectangle until he was moderately satisfied. Then he would use a console command to copy what he had onto a separate area.  That gave him the freedom to experiment with the structure, and if he didn’t end up liking it, he could restore the earlier version.  Clearly not yet actual source control, but the fundamental idea is already there.
  • Programming logic gates – Minecraft has command blocks which can be placed next to each other in order to chain together their output.  This, in addition to the redstone wiring component, is advanced enough that whole functioning computers can be built within the virtual world of Minecraft. I still don’t really understand how to use these things – despite Cyrus patiently explaining them to me …

Taking it to the Next Level

I decided we needed a Minecraft server of our own so that the three of us could join up in our own private Minecraft world.  Prior to this Cyrus was playing on his own Minecraft world in single player mode, which does not allow for collaboration.  I’ll describe the steps I took to set up the server so that Cyrus could start administering it.

I started by renting a droplet from DigitalOcean and enabling it for Docker.  DigitalOcean does a good job of making this easy – all I had to do was check the box and it appeared. By default you get a password to connect to the droplet over SSH, so the first thing I did was set up passwordless SSH from my PC to the remote server.  I generated a default SSH public key with ssh-keygen, then copied it to the authorized_keys file in my Linux user’s .ssh directory, making sure the permissions were correct. Then I logged out and back in, confirming that I could SSH in without typing a password.
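For anyone following along, the passwordless SSH setup boils down to roughly the following (the droplet IP is a placeholder; ssh-copy-id does the copy-and-permissions step described above in one shot):

ssh-keygen -t rsa                 # generate a default key pair on the local PC
ssh-copy-id root@203.0.113.10     # append the public key to ~/.ssh/authorized_keys on the droplet
ssh root@203.0.113.10             # should now log in without a password prompt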

Next up was finding a decent Docker image for Minecraft.  Now I could have messed around with installing Java and fetching the Minecraft server jar to run straight from the machine, but Docker makes it ridiculously easy to fetch pre-configured environments … so why worry?

I pulled up Docker Hub and searched for Minecraft.  There were a few choices there, but also a clear winner – itzg/minecraft-server with over 500k pulls.  The thorough documentation on this image is a first-class effort on the part of the developer – a role model for the rest of us.

I went ahead and executed a docker run as described in the excellent instructions, and within a little over a minute had a fully functional Minecraft server.  I went back and tweaked the command a bit, ending up with this:

docker run -e EULA=TRUE -e MODE=creative -e 'JVM_OPTS=-Xmx1024M -Xms1024M' -v /home/sara/minecraft/data:/data -d -p 25565:25565 --name creative itzg/minecraft-server

I found the Java opts setting to be a definite necessity, as we experienced laggy play without it.

To make things easy, I also registered a free duckdns url so we could avoid having to type in the IP address all the time.

The Making of an Admin

At this point we were able to play multiplayer Minecraft by accessing the url as a server from the Minecraft client.  Success! However, the whole point of this was for Cyrus to learn some fundamentals of how to manage a Minecraft server.  His first step was installing Cygwin on his Windows laptop. Next he added his PC’s public key to the remote server, following the same procedure I had gone through at the beginning.  Once that was set up, he was able to SSH in and run the commands “docker start creative” and “docker stop creative” to start and stop the server.

He also learned the basics of vim, enough to edit the server.properties file and configure the Minecraft world any way he wanted.  This was quite an achievement, since vim usage is one of the more esoteric arts in the world of Unix.    His first act as super administrator was to enable PvP mode, or player vs player, in which your in-game avatar can injure or kill other avatars.  This of course was the ultimate satisfaction – what better reward for your time and hard work than finding a new way to troll your younger sibling …

Containers to the Rescue

Cyrus wanted to understand the commands he was typing in, so I gave him an analogy for the technical explanation of what was happening.  I explained that we were using a Minecraft server in a Docker container, which I likened to a Twinkie in a plastic wrapper.  You could bake up a Twinkie yourself, or you could find one pre-made and packaged. Setting up a Minecraft server is like baking a Twinkie, while running a Docker container is getting the pre-packaged one. The analogy breaks down a bit since with this Twinkie, you never remove it from its plastic wrapper, but instead enjoy the cream filling from the outside without ever opening it.

One thing I didn’t explain to Cyrus directly but that he did learn to appreciate in a practical way is Docker mapping.  The cream filling in this case is a volume and a port – the local directory in which to store the Minecraft world and the port on the DigitalOcean droplet that is accessible to the outside world (and us). By mapping a Docker data volume to a directory on the host machine and a Docker port to the host port, we were able to benefit from all the functionality of the application running inside the container from the outside (read more here).

At one point we had to destroy the Minecraft server so we could re-create it running in a different configuration.  Would Cyrus lose all the work he had put into the world? The answer was a big No! He found out that we could – and should – treat the server and the data it generated as separate components. Minecraft does a good job of maintaining this separation on its own, since the server writes the worlds each to its own folder. Docker gave us the ability to delete and recreate the server easily whenever we wanted, and by running the container mapped to a local volume, it wouldn’t affect his creations.

How Much DevOps?

In a short amount of time, Cyrus learned the fundamentals of maintaining his very own server.  Minecraft gave him an incentive and a long-term reward in exchange for this investment of effort.  Let’s ask the question though: how much DevOps did he actually learn?

The particularly DevOps-themed concepts here revolved around accessing a remote server on a cloud infrastructure, and making use of an application, complete with its own special snowflake environment, that some other person already dealt with for us.  We didn’t have to spend much time at all finding a host for the server or setting up the server itself.  Instead, we went straight to the fun part, playing Minecraft and watching him kill his sister (in-game, of course) …

As a DevOps professional and enthusiast, it’s thrilling to me to have a member of the family who is so eager to investigate new technical skills and concepts.  I am also deeply impressed and in awe of the magnitude of his parents’ success in raising such inquisitive, bright, and motivated young people.  Finally, I have to hand it to Markus Persson (better known as Notch, creator of Minecraft) for conceiving and implementing a game that is incredibly fun to play, all while quietly preparing kids to be successful in the real world.  

Categories
Uncategorized

DevOps Engineering at Teradata

Teradata labs

In May I switched companies and tech fields – I am now working as a DevOps Engineer at Teradata.  During my last year at Axeda I had the opportunity to work on projects dealing with Amazon Web Services and EC2, and I became fascinated with the idea of managing infrastructure in code.  I am more on the “Dev” side than the “Ops” side of “DevOps”, as that is my background; however, my experience with Linux at Free Geek Providence gives me an advantage when dealing with systems.

I have now completed my first project at Teradata for deploying Hadoop clusters.  It’s a Node.js app that fronts a set of Chef cookbooks to deploy any of four different distributions of Hadoop on a virtual machine cluster.  The problem that it solves is that Teradata has products that need to work well on various flavors and versions of Hadoop, and the people who test these combinations have a hard time creating the dev environments.  They would have less trouble with the better known vendors such as Hortonworks or Cloudera, but the less commonly used vendors such as IBM BigInsights or MapR would present challenges in setting up due to their unfamiliarity.

The app I created is similar in purpose to Cloudbreak, however Cloudbreak  (at the time of writing) is only intended for use with Ambari-based vendors such as Hortonworks or BigInsights.

The best part is that this first project is only the tip of the iceberg in learning about cutting edge big data technologies.  Next up I will be learning Openstack, which is an open source solution for creating an entire cloud infrastructure – think creating your own private Amazon EC2.  Openstack is on track to become increasingly important as companies look to take back control of their infrastructure.

Shameless plug:  my Boston-based team is hiring, check out our Teradata careers page and say hi to me on LinkedIn or Twitter.

Categories
internet of things technology

Internet of Things Developer Days with Intel

Back in May we ran an Internet of Things Developer Day at the Axeda industry event Connexion with Intel Galileos and Raspberry Pis. A Developer Day is a hands-on workshop where developers get a guided experience with Axeda coaches while connecting microcontrollers to sensors and sending up data. It just so happened an Intel rep was there and had such a great time that we ended up partnering with Intel to organize a road show for the fall. We did one in New Jersey two weeks ago and we have now finished our first one in Silicon Valley.

Our technical team consisted of Kevin Holbrook, Joe Biron, Chris Meringolo, Allen Smith and Haris Iqbal all from Axeda, Howard Alyne from Wind River and Val Laolagi from ThingWorx. Intel supported us on the administrative side so we were able to focus purely on the content.

We had about 70 developers connect Galileos using our Axeda Developer Toolbox, which allows you to pick from any of about 25 Axeda Ready devices and get a self-guided tutorial on how to send data up to our cloud. The Galileos ran a proprietary Wind River version of Yocto Linux, which has cool security features baked in such as application signing and device identity key verification. The baseline for completing the tutorial was a round trip for the data, sending up light, sound, and temperature readings from the board and then triggering an action on the board from the app – in this case a blinking LED and a buzzing buzzer. We sent the developers home with documentation on an advanced path which took them through the ThingWorx dashboard, as well as a sample app that did AJAX calls to the Axeda RESTful web services.

It’s a gratifying experience for me to be able to coach developers past the initial hurdles of connecting a device. One student in particular whom I was helping had a Galileo board that was not able to get a serial connection to his PC over the COM port. I was able to log into the Axeda platform and see the IP address it was sending up as data, which we were then able to use to SSH into the board. A few minutes later he had his first circuit built with an LED, and by entering “blink 5” into the Toolbox app he saw the Axeda agent receive the downstream command and then blink the LED five times. After only a few more minutes he had his buzzer triumphantly buzzing as well, and then high five!

My key takeaway from these events is that developers are hungry to get experience with hardware and learn what the Internet of Things really is beyond the hype. Holding events like this one allows developers to find the meaning behind the buzz words and start laying the foundation for the future of their companies, one LED at a time.

For more information for Axeda Developers, check out http://developer.axeda.com .

Categories
Uncategorized

Hackathon – Sara’s Rules of Thumb

Hackathons are short, day- or days-long events in which hackers prototype a technical solution to a business problem and then pitch and demo that solution to an audience. I’ve been a coach and organizer for three Hackathons, and I’ve learned a few rules of thumb that make Hackathons more fun and valuable for the participants.


1) Make the device + sensors combination awesome

The target audience for the Hackathons we run are hardware hackers as well as IoT hackers, and if you get the right device – say the newest release of Arduino plus an a la carte selection of 50 sensors – their eyes light up and the creative juices start flowing. Pick the wrong device and you get to hang out with them on the couch wondering how to help them figure out what to do.

2) Code shoulder-to-shoulder with the hackers

Don’t show up and put your feet up on the table and say, “well I’m here so I’ve done my job!” No, get down and get dirty in the code, help out with code snippets and make things *work*. They’re here to turn an idea into reality and the best outcome is if the help of the coaches is available to make everyone successful. Let the best idea be implemented the best way it can and whaddya know cool things will happen.

3) Pick the right judges

Judges hold the power of the purse over the heads of the hackers. Prize money goes where the judges deem it worthy to go. For that reason you want judges who represent a few different perspectives – a business guy, a software gal, a hardware person, an investor – so the hackers have the best chance to impress at least one of them with their demo. Judges can give value to hackers simply by offering plenty of feedback, so make sure to select for experience and expertise in their area.

4) Help them with their demo

Putting a solution together is only part of the challenge of participating in a Hackathon, the other part is presenting it in an exciting and professional way. While working with hackers on the technical implementation, don’t forget to ask them if they have a slide or two on their project that explains its business value. Ask if they have a business-oriented team member and if they don’t, find them one.

Hackathons are fun and sometimes extreme events with coding going all through the day into the night and the morning. The experience alone makes it worth participating in one and who knows, you might be surprised at what you can put together!

Categories
geek

All about Free Geek Providence

I’m on the Board of Directors for a nonprofit called Free Geek Providence based in RI. Free Geek Providence works to refurbish, reuse and recycle older computers. The organization also promotes the use of Linux, an open source operating system, as well as the open source ecosystem around it. I joined Free Geek Providence when it was starting up in 2008 and found it a perfect fit.

Access into modern society via technology is the new divide between the haves and the have-nots. As a result of this divide, we need to build bridges to bring back those people on the outside and introduce them to the skills needed as part of a computerized workforce.

The work we have done giving out free computers has benefited many nonprofits and individuals. The Rhode Island Nurses Institute (RINI) became the pilot for our Adopt a Classroom program. We installed 14 computers in a computer lab that students without computers at home could use to do their homework. An added benefit was exposure to the free operating system Linux. At first the students were uncomfortable using Linux because they were used to Microsoft Windows, but they soon made the switch and had no problem. Now those same students know that Linux and open source software are out there and can take advantage of them for the rest of their lives.

Follow freegeekpvd on Twitter and learn more on Facebook!