Tag Archives: Linux

CentOS Dojo, Brussels 2016

On Friday I attended my first Centos Dojo in Brussels, Belgium, held one day before FOSDEM. The timing fits in well for the Open Source community, since you can attend this, FOSDEM and Configuration Management Camp as part of one trip. The official Centos events page will have links to the videos and slides, but I wanted to share these notes that I made from the day.

What is a dojo? The Centos Dojo website states that “The CentOS Dojo’s are a one day event, organised around the world, that bring together people from the CentOS Communities to talk about systems administration, best practises in linux centric activities and emerging technologies” and it certainly lived up to these expectations. The good: each presentation was about an hour long, the speakers knew their topics, many of the sessions were interactive allowing for questions and discussions.

The day began with Karanbir Singh introducing the State of the CentOS Project. During the Q&A, a couple of interesting things were raised. Firstly, someone had issues with the graphical updater failing after an update to Centos 7.2. Karanbir suggested that many things were re-based (ie underwent a major upgrade) in RHEL / Centos 7.2, including many of the Gnome packages. It sounded as though these types of major update may become more commonplace in the future. On the one side people want reliability and stability, but at the same time they want newer functionality, so things like a graphical updater breaking may be an occasional casualty. The advice is to raise bugs wherever possible.

There was another question around Centos providing Extended Update Support (EUS). Essentially, this boiled down to asking Red Hat to make those EUS updates available as source RPMs; if they did that, then Centos could package them. There are 4 components at play in this discussion: Copyright, License, Contract and Intellectual Property. Red Hat are one of the better companies for publishing Open Source code, but the EUS packages are an exception, and anyone using them has probably agreed in their contract not to redistribute them. Ask yourself this: do other Operating Systems such as SuSE and Ubuntu provide their sources in an easy-to-consume manner?

Why is a package in RHEL but not in Centos? Answer: check the Centos Release notes – they will state why a package cannot be built for Centos (for example, for branding reasons).

What about the future of Centos? They want to enable future technologies. To facilitate this they’ve released Centos for 32-bit x86, ARM32 built as armv7, ARM64 built as aarch64, POWER8 Little Endian as ppc64le and POWER7 Big Endian as ppc64. The key thing about all of these different architectures is that the final release is familiar and consistent across the board. These provide a platform for Openstack to build on with guaranteed updates – and with groups such as the Cloud Special Interest Group working together, things like an update from Openstack or Centos shouldn’t break anything.

Karanbir was very enthusiastic about the Centos Continuous Integration (CI) platform. “The CentOS CI is a public resource that open source based projects can use for integration tests on bare metal hardware. The goal of the project is to be a resource for communities that build on top of CentOS in order to enable them to perform better automated testing.” In simple terms, you could put a docker file into the system, and you’d get a container out which would be fully tested.

On a final note, Karanbir also asked if anyone had contacts at Digital Ocean who could help the Centos team work with them to provide updates for Digital Ocean images.

Links:
Karanbir Singh Website
Karanbir Singh on Twitter

I then attended a number of other sessions. Here are the notes and links!

Relax-and-Recover simplifies Linux Disaster Recovery by Gratien D’haese

Relax-and-Recover (Rear) is now included in RHEL / Centos 7.2, so it’s now easy to install. After you’ve set things up, remember to continually rehearse, maintain and review – after all, what good is a backup plan if you’ve never used it and don’t know that it works? Essentially you produce one image per server (physical or VM). The image will contain basic information such as disk partitioning details and the IP address to use. You could create a standard image if all your servers are the same, but remember that you’ll need to change the IP address in the kernel options at boot time. Also, if you’re integrated with TSM or another backup solution, the configuration file will need updating. So perhaps one image per server is good. Or maybe use a dedicated IP address which you always perform restores to; once the server is recovered, you could then move it to an alternative address. You always need to rerun rear if partitions change. In terms of backup, it will not make a note of any SAN details by default. Rear has been removed from EPEL because it is now part of the core RHEL / Centos 7 base distribution.
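To make that concrete, here’s a minimal sketch of a Rear setup assuming an NFS backup target – the server name and export path are placeholders, not anything from the talk:

# /etc/rear/local.conf -- minimal sketch; the NFS server and path are placeholders
OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL="nfs://backupserver/export/rear"

With a config like that, running “rear -v mkbackup” produces the rescue ISO plus a backup archive on the NFS share, and “rear recover” (run from the booted rescue image) restores the system – remember to rerun mkbackup after any partition change.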

Desktop security, keeping the keys to the castle safe by Michael Scherer.


Sometimes it’s necessary to step back and make sure you are doing all you can to keep your systems secure. As a sysadmin you may well have access to confidential information or trade secrets, which makes you a rich target. Sure, there will always be low-skilled, automated attempts to log in to your systems. But an advanced persistent threat (APT) is a network attack in which an unauthorized person gains access to a network and stays there undetected for a long period of time. How can we protect ourselves and our companies?

Run a laptop: choose free, open source software, beginning with the Operating System. Use supported, recent software. If the software was written 10 years ago, that’s fine provided it’s still being updated, but an obsolete package could leave you vulnerable. Newer distributions tend to have better cryptography and better security tools. Don’t use a random repository. Check the build system. Use full disk encryption such as LUKS. Why not just encrypt /home? Well, if you’re running things like containers, these will probably write to areas outside of /home, and who knows, you may have other customisations which an attacker can use. VeraCrypt and TrueCrypt are good tools.

Protect against theft. Prevent cold boot attacks, where an attacker steals the RAM. Beware of the “Evil Maid Attack”, in which an attacker installs a bootkit on an unattended computer, replacing the legitimate boot loader with one under his or her control. Typically the malware loader persists through the transition to protected mode when the kernel has loaded, and is thus able to subvert the kernel. Use Secure Boot to ensure trust. Perhaps use a TPM? Anti Evil Maid can be complicated to set up. What about Anti Evil Maid 2 with TPM OTP? It may be good, but it’s not easy to use. Beware of FireWire DMA attacks (Inception). Other options: a boot loader on a USB stick, LUKS, a self-encrypting stick. Remember that USB is a bus, so devices on it are able to sniff traffic – do not accept random gifts. Then there are filesystem bugs and USB stack bugs; USB guards can help. Hardware security can be depressing! Consider Qubes OS. See also Installing and Using Anti Evil Maid (AEM) with Qubes OS.
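For reference, full disk encryption with LUKS boils down to something like the following when done by hand – a sketch only, the device name is a placeholder and luksFormat destroys whatever is on it (most installers will do all of this for you if you tick the encryption box):

cryptsetup luksFormat /dev/sdX2          # encrypt the partition (destroys existing data!)
cryptsetup luksOpen /dev/sdX2 cryptroot  # unlock it as /dev/mapper/cryptroot
mkfs.xfs /dev/mapper/cryptroot           # create a filesystem on the unlocked device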

Use strong passwords and take human factors into account. Use a Password Manager, but protect it with a good password. Keep no data on the laptop. Disable what you don’t need, and do not listen on the network. Use different users. Use a VM or Vagrant; containers maybe, but they are not a security boundary. Virus scanners can be dangerous – a bad file can cause them to crash (and they’re likely to be running as root!). Beware of IPv6 and Shodan. Phishing works on everyone – the talk even mentioned an NSA manager being caught. Do not open random attachments; open them in a VM, or use selinux-sandbox, Firejail and the like (not Docker). Use SELinux on the desktop: MCS policy, confined users, XDG-apps. For Firefox: remove Flash, block Java and enable it only when needed, block multimedia content, disable WebRTC and network access, and use HTTPS Everywhere, NoScript and Certificate Patrol. Some people remove all CAs. Watch out for Rowhammer.js.

Privacy? Mass surveillance and exploits delivered through adverts: use an ad blocker and CookieMonster. Local attacks: use a screensaver with a password, lock on idle, use TMOUT to protect the tty and never leave a root shell open; have sudo require the password again once credentials expire. Disable ptrace, or use the SELinux boolean to disable it. Use SSH keys rather than passwords for login; do, however, put a password on the key and use an SSH agent. Don’t use agent forwarding – a remote admin can take advantage of it. Use one key per device (laptop1, laptop2, etc), change keys on a regular basis and automate the key changing. Store the key on a smartcard such as a Yubikey.

Audit all the time and store the audit trail on a different server. Make it hard/slow to delete log files (eg issue a command today to delete the logs, but have the actual deletion occur many days later). Machine learning can track sysadmins – it can be a little creepy, but it can spot unusual behaviour; it’s used at Facebook and Google and will, for example, flag a sysadmin who has unusually logged into multiple servers. Use AIDE/Tripwire on servers, though note that this can work poorly on Fedora due to frequent updates; consider OSTree. Run Logwatch on the laptop: for example, an application may be crashing regularly as a result of an exploit someone is trying to take advantage of – after crashing the application 10 times they may get root!
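A few of those SSH and tty points translate directly into commands; here’s a rough sketch (the key names and the timeout value are just examples):

# one passphrase-protected key per device, so a single compromise is easy to revoke
ssh-keygen -t ed25519 -C "user@laptop1" -f ~/.ssh/id_ed25519_laptop1

# load the key into an agent for this session; note: no "ssh -A" agent forwarding
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519_laptop1

# log idle shells out after 10 minutes
echo 'readonly TMOUT=600' > /etc/profile.d/tmout.sh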

Automated Infrastructure Testing with Oh-My-Vagrant and the CentOS CI by James Shubin.

This was a very entertaining, lively and informative presentation! It included fire! James began with a quick docker overview – docker is just a process on the computer – and explained that vagrant gives you a way to spin up virtual machines really easily. Vagrant allows you to quickly set up things like a puppetmaster or ansible server so that you can test out code before it hits production. Oh-My-Vagrant makes it easier and faster to deploy a cluster of virtual machines and containers than with Vagrant alone: with a simple yaml file you can set up a suite of servers. James also introduced a whole suite of tools which can save a lot of time. He also demonstrated hooks into subscription-manager, allowing a container to start, register with Red Hat, perform the necessary tests, and then unregister when the container is destroyed (via vdestroy). In addition, he showed that some images may have limited disk size, so his RHEL script will grow the filesystem (xfs grow) when they are created. See the Oh-My-Vagrant examples on github. He is working on a new config management tool called mgmt which he’s presenting next week.
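For context, even plain Vagrant can describe a small cluster in a few lines – the sketch below is ordinary Vagrantfile syntax rather than Oh-My-Vagrant’s yaml (the box and hostnames are placeholders), which gives an idea of the boilerplate OMV removes:

cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  # define three identical nodes: node1, node2, node3
  (1..3).each do |i|
    config.vm.define "node#{i}" do |node|
      node.vm.hostname = "node#{i}.example.com"
    end
  end
end
EOF
vagrant up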

Links:
James Shubin blog
James Shubin on Twitter
James Shubin on Github

Path from Software Collections to Containers for OpenShift by Honza Horak.


This presentation was aimed more at developers than administrators, but it gave some very useful advice for creating containers. For example, if you want to keep the container size down, always run a “yum clean all” as part of the dockerfile. If installing software, use “yum --setopt=tsflags=nodocs” so as not to include documentation, and remember not to run your docker instances as root. Honza also referred to Nulecule, a standard way of defining the configuration of multi-container applications.
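Putting those tips together, a container build might look roughly like this – a sketch of my own, with the base image, package and user id as examples rather than anything from the talk:

cat > Dockerfile <<'EOF'
FROM centos:7
# skip documentation and clean the yum cache in the same layer to keep the image small
RUN yum install -y --setopt=tsflags=nodocs sysstat && \
    yum clean all
# don't run the container process as root
USER 1001
CMD ["iostat", "1"]
EOF
docker build -t example/sysstat .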

Getting started with kubernetes by Sebastien Goasguen.


Sebastien is the author of the Docker Cookbook. He went through a brief history of orchestration technologies, including HTCondor (https://en.wikipedia.org/wiki/HTCondor). Essentially, Linux containers have arrived and docker makes them easy to develop with. Google gave us cgroups, which allow us to isolate processes from one another. What we’re now seeing is a battle for the orchestrator role: Mesos, Rancher, HTCondor, Kubernetes. Essentially all orchestrators work roughly the same way – there is an admin node, a datastore for persistence and an agent running on each node. Yes, you can now use Kubernetes to run batch jobs just like 25 years ago! However, you can also do a lot more. In choosing an orchestrator, why use Kubernetes? Google gained a lot of experience and learned many lessons from Borg, and those lessons make Kubernetes very valuable to the Open Source community. If the master fails, pods keep on running; technically the master can restart and recover if the data store is held externally. The main primitives in Kubernetes are pods, replication controllers and services. Atomic is pre-packaged with a docker engine, Kubernetes systemd units and flannel systemd units, which makes it a very good foundation for starting out on the Kubernetes journey.
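As a flavour of those primitives, a minimal pod definition looks something like this – a generic sketch assuming a working cluster, with the image and names as placeholders:

cat > nginx-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
EOF
kubectl create -f nginx-pod.yaml
kubectl get pods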

Atomic Developer Bundle – Containerized development made easy by Navid Shaikh & Brian Exelbierd.


The ADB is a vagrant stack which allows developers to work in a familiar environment. For example, they gave an example of “Command line Carl” who is a developer who primarily uses the command line for this work. In this scenario, the pre-built vagrant instance starts up and gives a whole suite of tools ready for development. The docker daemon runs in the instance and the whole things protected by TLS. Similarly, if a developer uses an IDE such as Eclipe, they can just run “vagrant adbinfo” and use a Docker plugin for eclipse. Why does the ADB chose to work with Centos? They like it because they can get valuable feedback from the community and they can make rich use of the software the other special interest groups product. They hold regular meetings and in the future they want to add in more support for additional hypervisors and orchestrators.
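The command-line workflow is roughly as follows – a sketch from my notes rather than the exact ADB output, the point being that “vagrant adbinfo” prints the DOCKER_* variables needed to reach the TLS-protected daemon in the box:

vagrant up
eval "$(vagrant adbinfo)"   # sets DOCKER_HOST, DOCKER_CERT_PATH, DOCKER_TLS_VERIFY
docker ps                   # the local client now talks to the daemon inside the box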

Link: Atomic Developer Bundle (ADB) on github

That’s about it! It was a great event that I’ll hopefully return to. I highly recommend it!

Puppet Camp London 2015

Today I attended my third Puppet Camp – Puppet Camp London Spring 2015 – at The Mermaid Conference Center in Blackfriars, London. The event was really informative and I thought it was worth posting up some notes and links about the day.

The auditorium was really nice with plenty of room, very clear sound and a nice clear cinema-sized screen.  Some smaller tech events I’ve been to recently weren’t as good as this, so well done to the organisers and many thanks to sponsors Solidfire, Pagerduty and Speerhead.

Here’s a list of the talks:

Puppet Keynote by Gareth Rushgrove of Puppetlabs. Twitter: @garethr. Gareth gave a great overview of Puppet and Configuration Management (ideal for those members of the audience who were looking at Puppet for the first time) and then spoke about the new features in Puppet 4.0. The key takeaway was how the newer, best-practice methods of managing servers have really evolved over the last 10-20 years. He pointed to the 2014 DevOps Report, in which servers and infrastructure are now more reliable than ever, but teams also need to be more agile (more releases, more often) in order to beat their competition.  A quick show of hands in the audience showed how configuration management is used by both traditional developers and traditional sysadmins in equal measure.

 

Why Puppet? Why now? by David Mytton, Server Density. Twitter: @davidmytton. An interesting talk from the point of view of someone who started building a business using a small number of hand-crafted servers and quickly realised the importance of using Configuration Management to scale out the infrastructure in a consistent way.

Helping Data Teams with Puppet by Sergii Khomenko, STYLIGHT. Twitter: @lc0d3r

Autosigning Certificates with Time-based One Time Passwords by David Ellis, TIM Group. (David Ellis, TIM Group, Developer Blog)

Puppet and Your Metadata by Marc Cluet, Ukon Cherry. Twitter: @lynxman. Marc was a really good presenter and gave a great overview of the different types of metadata that are available: Structural Metadata like IP addresses and architecture (things that are typically set and cannot be changed) and Descriptive Metadata like $puppetver and $apachever (things that you typically want to set on a host). Marc made two great references which I’ll look into for storing sensitive information: hiera_gpg and hiera_eyaml (hiera_eyaml on github). When automating metadata there are 4 sources to look at: Provisioning, Puppet, Monitoring and Services. Marc touched on Consul for data discovery and there was more on that in a later talk. Another great point he made was to make sure that any custom facts you create in puppet are returned in a timely manner. Consider that it might take 20 seconds to generate a dynamic fact; scale that out over hundreds of servers and hundreds of puppet runs and you can see how wasteful that is. For any custom facts that take time to generate (say, running an SQL query to set some custom stats), run them as a cron job and simply store the data in a text file in /etc/facter/facts.d/
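Here’s a small sketch of that last tip – the fact name and query are made up, but the mechanism (a cron job writing key=value pairs into /etc/facter/facts.d/) is the standard external-facts approach:

cat > /etc/cron.hourly/refresh-db-stats <<'EOF'
#!/bin/bash
# run the slow query out-of-band so puppet runs stay fast
count=$(mysql -N -e 'SELECT COUNT(*) FROM mydb.orders')
echo "db_order_count=${count}" > /etc/facter/facts.d/db_stats.txt
EOF
chmod +x /etc/cron.hourly/refresh-db-stats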

Slides for Marc’s talk can be found here: Puppet and Your Metadata

Puppet Demo by Steven Thwaites, Puppet Labs

Puppet Contained by Owen Ben Davies, Big Sofa.  Personal Homepage: Owen Ben Davies UK. Slides: Puppet Contained. Ben covered how developers can use Vagrant and Docker together in order to set up and test applications that are representative of production environments.  One point around Docker images was discussed at the end of the presentation: when should you use Puppet – during the creation of a container, or during the runtime of a container?  As containers are small and made up of small deltas, there is an argument that you should use puppet on the build side for generating images, and then use Service Discovery with Puppet when the images are being run.  Do the work once, up front.

The last presentation was meant to be Puppet Performance Profiling (Intermediate) – R.I.Pienaar but unfortunately RI couldn’t make the conference.

Instead, Gareth Rushgrove gave a talk and demo titled Service Discovery and Configuration Management – Two Speeds of Configuration.  The slides for Gareth’s talk can be found here: Service Discovery and Configuration Management.  The idea here is that we can use Service Discovery with puppet to dynamically configure our infrastructure.  A number of different service discovery tools exist already, such as etcd, Consul and Zookeeper.  They’re all used in well-known projects (etcd by CoreOS, Cloud Foundry and Kubernetes; Zookeeper by Hadoop; Consul by SocketPlane and Cloud Foundry).  In Gareth’s example, he demonstrated Consul with puppet, triggering a puppet run when changes were made to the running services.  The puppet run creates a dynamic configuration file (in this case for NGINX) pointing at the active applications which were discovered by Consul.  So, stopping an application on one node triggers an immediate puppet run on a server which relies on it.  Within a few seconds web requests are redirected only to applications running on different servers.  The code for the module can be found at lynxman/hiera_consul on github.
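I haven’t seen Gareth’s exact wiring, but the general shape of “service change triggers a puppet run” can be sketched with a Consul watch like this (the service name and script path are placeholders, not his actual setup):

cat > /usr/local/bin/run-puppet.sh <<'EOF'
#!/bin/bash
# called by consul whenever the watched service's membership changes
puppet agent --test
EOF
chmod +x /usr/local/bin/run-puppet.sh

# re-run puppet whenever instances of "myapp" are registered or deregistered
consul watch -type=service -service=myapp /usr/local/bin/run-puppet.sh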

Overall, this was a day well spent with plenty of ideas to make better use of puppet.

Oh, and thanks to Pagerduty for the hat, Puppetlabs for the T Shirt and Stickers, and Solidfire for the socks!


Puppet Camp London 2013

Today I attended Puppet Camp London at Mary Ward House. As with the previous London Puppet Camp in March, attendance was very good with something for everyone who’s using Puppet.

Here’s a quick link to the presentations and/or speaker profiles:

  • Puppet Enterprise Demo – Puppet Labs

Update 8/Dec/2013: there’s now a blog post on the Puppet Labs website about the event – Mind the Gap at Puppet Camp London – so I’ve added the video presentations to this article.

Fedora Kickstart DNS Dependencies

In the previous post we added the updates and fedora repositories to our kickstart file. That should mean the packages are pulled down from the Internet if they can’t be found on the install media (the media referenced in the “url” statement).

When I first tried this on my home installation, I was still getting errors about being unable to find some software packages. A look at the install consoles (ALT+F1, F2, F3, etc.) showed that the fedora and updates repositories were being disabled. Initially I couldn’t figure out why; then it dawned on me. I was using a temporary DHCP server on a Centos 6 provisioning host, and my options were as follows:


option space PXE;
option PXE.mtftp-ip code 1 = ip-address;
option PXE.mtftp-cport code 2 = unsigned integer 16;
option PXE.mtftp-sport code 3 = unsigned integer 16;
option PXE.mtftp-tmout code 4 = unsigned integer 8;
option PXE.mtftp-delay code 5 = unsigned integer 8;
option arch code 93 = unsigned integer 16; # RFC4578
deny unknown-clients;
subnet 192.168.105.0 netmask 255.255.255.0 {
option routers 192.168.105.1;
range 192.168.105.200 192.168.105.240;
}

Note that my DHCP server wasn’t returning any DNS servers as my client booted. As such, when the Fedora install started it had no concept of DNS servers to use.  So, I added the line below so that my Fedora install would use the Google public DNS servers (you could replace this with your own DNS servers or those of your ISP):


option domain-name-servers 8.8.8.8;

The installer was now able to use the fedora and updates repositories to pull down the required packages.
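For completeness, here’s roughly how the finished stanza looks with the DNS option in place – a sketch using the addresses above; the option could equally be set at global scope:

subnet 192.168.105.0 netmask 255.255.255.0 {
  option routers 192.168.105.1;
  option domain-name-servers 8.8.8.8;
  range 192.168.105.200 192.168.105.240;
}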

Lesson learnt: make sure that you supply a valid DNS server on your clients when using the kickstart file.

Fedora Kickstart Installation Sources

The previous post showed the kickstart file generated using a minimal installation on my Thinkpad W530.  It’s this base kickstart file which we’ll update and customise, in much the same way as we would do if we were working on the target machine.

I typically install the following packages via yum:

sysstat
conky
autofs
simple-mtpfs
critical-path-kde

Apart from conky and simple-mtpfs all of those applications are fairly generic.  As such I was hoping that they would be available on the Fedora installation DVD.  So, I first updated the packages section of the kickstart file like this:


%packages
@core
sysstat
conky
autofs
simple-mtpfs
@critical-path-kde

However, this kickstart file resulted in errors stating that the packages could not be found.

On an installed Fedora 19 system, I could see these packages came from either the “updates” repository or from a repository called “fedora”.


# yum list sysstat conky autofs simple-mtpfs
autofs.x86_64        1:5.0.7-28.fc19                @updates
conky.x86_64         1.9.0-4.20121101gitbfaa84.fc19 @fedora
simple-mtpfs.x86_64  0.1-6.fc19                     @fedora
sysstat.x86_64       10.1.5-1.fc19                  @fedora

I then came across the following links:

Anaconda/Kickstart – repo usage

Red Hat Bugzilla 979154 – Fedora 19 RC2 kickstart with “repo –name=fedora” crashes

Fedora 19 Common Bugs – Problems with Installation Source and Installation Destination spokes when installing from a partially complete kickstart

This first link states that “By default, anaconda has a configured set of repos taken from /etc/anaconda.repos.d plus a special Installation Repo in the case of a media install. The exact set of repos in this directory changes from release to release and cannot be listed here. There will likely always be a repo named “updates”.”

I had another look on the DVD and sure enough those packages were not listed. So, what I actually needed to do was enable these extra repositories in the kickstart file.

Here’s what the updated sections of the kickstart file will look like:


# Use network installation
url --url="http://192.168.105.1/os/fedora/19/Fedora-19-x86_64-DVD"
repo --name=fedora-kickstart --baseurl=http://192.168.105.1/os/fedora/19/Fedora-19-x86_64-DVD
# Need fedora so we can pull down things like sysstat
repo --name=fedora
# Use this to get full updates
repo --name=updates

The updated kickstart file will cause the installer to register these extra repositories and use them when it gets to the %packages section of the kickstart file.  In this manner, it will also pull down updates to the O/S from the Internet using the “updates” repository.

Ultimate Fedora Kickstart

I recently decided to re-install Fedora 19 on my Thinkpad W530. I thought it would be worthwhile documenting the steps and using a kickstart server (in this case running Centos 6) to be able to replicate the build in the future – for example when Fedora 20 is released – and for kickstarting other devices. Sure, it’s now possible to upgrade Fedora between releases using FedUp, but if all of your personal data is on a separate (backed up) partition then a clean, custom install will give you a fresh start and make sure no old configuration files or packages are left behind.  If you document your customisations via a kickstart file, it means the headaches of the re-install can be minimal.  In fact, everything you would do on the command line post-install can be done via a kickstart file.  Another advantage is that should a newer filesystem type come along you can simply reformat your O/S partition to this new type.

The next couple of posts will document some of the steps in creating the kickstart file and will cover:

Fedora Kickstart – Installation Sources
Fedora Kickstart – DNS Dependencies
Fedora Kickstart – Ultra Minimal KDE Installation
Fedora Kickstart – Additional Repositories
Fedora Kickstart – Thinkpad W530 add-ons
Fedora Kickstart – Post-Installation Tasks

To begin this process, I first installed Fedora 19 by hand and chose a minimal installation. This gave me a /root/anaconda-ks.cfg kickstart file from which we can work.

It will look something like this:


#version=DEVEL
# System authorization information
auth --enableshadow --passalgo=sha512
# Use network installation
url --url="http://192.168.105.1/os/fedora/19/Fedora-19-x86_64-DVD"
# Run the Setup Agent on first boot
firstboot --enable
ignoredisk --only-use=sda
# Keyboard layouts
# old format: keyboard us
# new format:
keyboard --vckeymap=us --xlayouts='us'
# System language
lang en_GB.UTF-8
# Network information
network --bootproto=dhcp --device=eth0 --noipv6 --activate
network --hostname=localhost.localdomain
# Root password
rootpw --iscrypted XXX
# System timezone
timezone Europe/London
user --groups=wheel --homedir=/home/user --name=user --password=XXX --gecos="User"
# System bootloader configuration
bootloader --location=mbr --boot-drive=sda
# Partition clearing information
clearpart --none --initlabel
# Disk partitioning information
part /boot --fstype="ext4" --onpart=sda6 --label=fedora19-boot
part / --fstype="ext4" --onpart=sda7 --label=fedora19-root
part swap --fstype="swap" --noformat --onpart=sda8
part /boot/efi --fstype="efi" --noformat --onpart=sda2 --fsoptions="umask=0077,shortname=winnt"
%packages
@core
%end

For clarity, the ‘url’ shown above would refer to an internal apache webserver from which the Fedora DVD is shared.

Whilst the final kickstart file won’t be for everyone, I’ve called it the ‘Ultimate Fedora Kickstart‘ because it’s the ultimate for my needs. No doubt, you’ll have your own version of this 🙂

Using flock to ensure only one instance of a script can run

Whilst browsing, I came across the following post from Randall Schwartz of Perl and FLOSS Weekly fame.

“flock” is one of those utilities I’ve not used very much, but if you want to create a script and ensure that only a single instance of it can run at any one time then this is a really neat utility. No lock or PID files to mess with, no “ps -ef | grep” type of scripting to incorporate.


#!/bin/sh
(
    if ! flock -n -x 200
    then
        echo "$$ cannot get flock"
        exit 0
    fi
    echo "$$ start"
    sleep 10 # real work would be here
    echo "$$ end"
) 200< $0

One to file away for future use :)

Puppet Camp 2013

Yesterday I attended Puppet Camp London 2013 at Somerset House. It was an interesting day with a lot of good talks and demonstrations.  In this article, I’ll attempt to link to all of the speakers and slides from the event and describe what I found interesting.  The day was sponsored by Red Hat and Quru.

The day began with Dawn Foster, Community Manager at Puppet Labs, introducing Puppet Labs CEO Luke Kanies.

  • State of Puppet: Luke Kanies – Puppet Labs CEO

State of Puppet detailed the history behind the creation of puppet, how things started and where they are now. It was apparent from the slides that there has been a large growth in puppet deployments, community and modules over the last 12 months. I especially enjoyed the point that the ‘old’ ways of doing upgrades – eg taking down services for a migration on a Friday evening, performing the required steps, and then starting things up again on Monday – just don’t work in today’s environment. We’re used to having IT available at all times – we want to access Internet Banking whenever we want to, we expect access to news, blogs and entertainment 24 hours a day, and we’re more likely to be running services that are available internationally, so the traditional ‘maintenance window’ is no more.

Another important point was that when puppet was created, there wasn’t much cloud deployment. Nowadays, it’s everywhere, and having a tool like puppet to manage these instances is very useful. We even have VMs being created and destroyed dynamically for just a single HTTP request. With Puppet, we can basically keep everything in ‘sync’ using a standard programming syntax rather than custom scripts. Luke explained that Puppet Labs began with an Open Source product and made money by providing consultancy services to set it up. Nowadays, they’re keeping some features for their Enterprise products. There’s nothing wrong with this; I just hope that Open Source components which overlap with Enterprise features, such as Puppet Dashboard, don’t fall by the wayside.

Other items mentioned in the presentation include PuppetDB (which tracks the status and changes in your environment in a database) and plans for more configuration tools to push configurations to servers at specific times or under controlled conditions. There was also talk about the ability to add machine dependencies within Puppet, eg provision a database, but don’t start the webserver that talks to it until the database host has been fully provisioned. In terms of user base, Puppet has lots of clients including Barclays, the FT and the LSE in London, and Google, Cisco and HBO in the US, plus many more. The size of deployments varies too, from managing just a few servers to managing tens of thousands.

The slides from Luke’s talk can be found here: State of Puppet – London. Readers may also be interested in Chris Spence’s State of Puppet slides featured on the Puppet Camp Barcelona Wrap Up blog post, or the slides from the San Francisco Puppet Camp – State of Puppet – San Francisco 2013.

  • Building reusable modules: Jon Topper – Scale Factory

All of the talks were interesting, but this is the one where I can start to reap immediate rewards. Firstly, it provided good ways of writing puppet modules, and there are definitely good take-aways here, such as writing puppet modules that perform very small, discrete pieces of work. Dependencies between puppet classes are also a bad idea. rspec-puppet, puppet parser and puppet-lint are great tools for checking your code, although it was pointed out that puppet-lint can be very, very picky, so use it with settings that work for you.
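For example, those checks can be run locally before committing – a typical invocation might look like this, where the disabled checks are just an illustration of relaxing the picky defaults:

puppet parser validate manifests/init.pp
puppet-lint --no-80chars-check --no-documentation-check manifests/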

You can find more about Scale Factory from their website, whilst the slides from Jon’s presentation can be found here – Building Reusable Puppet Modules.

Jon’s Twitter profile is jtopper.

  • Automated OS and Application deployment using Razor and Puppet: Jonas Rosland – EMC

The slides that Jonas presented can be found at Puppet Camp London 2013 Puppet And Razor Jonas Rosland.

Razor is a provisioning system that can be used to quickly provision new servers – both physical and virtual. The key thing is that it’s event driven rather than user driven. In the demo, Jonas configured Razor to provision certain types of servers depending on certain conditions. The example used physical RAM to determine what type of Operating System should be installed when a server is PXE booted, but you can base it on any of the variables that you get from facter. I’m not sure how this would work in remote sites where you don’t have a PXE server. The install of Razor looks very straightforward.

Other tools worth looking at are: The Foreman, Cobbler, vSphere Auto Deploy

Jonas has some useful links on his pureVirtual website: Puppet and Razor.

Jonas’s Twitter profile is virtualswede

  • De-centralise and Conquer: Masterless Puppet in a dynamic environment: Sam Bashton – Bashton Ltd.

The slides that Sam presented can be found at Decentralise And Conquer Masterless Puppet In A Dynamic Environment.

This was a really interesting presentation. Essentially, Sam was building a set of RPMs which can then be deployed to the target servers via Pulp. Puppet then runs locally on the remote target, triggered from a post-install command in the RPM package. There’s no central puppetmaster in this setup, so no single point of failure.
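The core of that approach, as I understood it, is simply a local puppet apply fired from the package’s post-install scriptlet – something like this sketch, where the paths are placeholders rather than Sam’s actual layout:

# roughly what the RPM %post scriptlet would run on the target server
puppet apply --modulepath=/usr/share/puppet/modules /usr/share/puppet/manifests/site.pp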

Sam’s Twitter profile is bashtoni

  • Building self-service on demand infrastructure with Puppet and VMware: Cody Herriges – Puppet Labs

Cody talked about the pros and cons of running your own infrastructure versus using hosted solutions such as Amazon. His slides can be found here – Building self-service on demand infrastructure with Puppet and VMware

  • Enterprise Cloud Management and Automation: John Hardy – Red Hat

John presented ManageIQ. This clever piece of software interrogates your SAN arrays and discovers the Virtual Machines that are installed there. It can then look inside these machines to determine what’s running and what files are installed, record changes to those files and perform full inventory control. It can even prevent a VM from being powered on if it violates a policy, such as not being an approved O/S. ManageIQ is being used by UBS and other big organisations. Red Hat acquired ManageIQ in December 2012, so expect to see this rolled into Red Hat products soon. Hopefully, much of it will become open source too.

  • Puppet Demos: Chris Spence – Puppet Labs

There was no slideshow from Chris; it was a hands-on demo showing how Hiera can simplify puppet code and how configuration files (such as a load balancer’s) can be dynamically generated as servers are powered up and powered down, and he showed some useful Puppet 3.0 commands.
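As a taste of the Hiera side of the demo, here’s a minimal Puppet 3 era setup sketched from memory – the paths, key and values are examples only, not from Chris’s demo:

cat > /etc/puppet/hiera.yaml <<'EOF'
---
:backends:
  - yaml
:hierarchy:
  - "%{::clientcert}"
  - common
:yaml:
  :datadir: /etc/puppet/hieradata
EOF
mkdir -p /etc/puppet/hieradata
echo 'ntp::servers: ["0.pool.ntp.org", "1.pool.ntp.org"]' > /etc/puppet/hieradata/common.yaml

# look the key up from the command line
hiera -c /etc/puppet/hiera.yaml ntp::servers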

Chris has written some puppet modules which can be found on Puppet Forge and has some useful material on his blog.

Chris’s Twitter profile is tophlammiepie

  • Closing thoughts

Overall, it was a good set of talks and it was great to talk to other puppet users and discover how they are using it. I’ll certainly be using Hiera for deployments and I’m going to start using tests for my modules. In terms of contact with the Puppet community, I’ll definitely make use of ask.puppetlabs.com and puppet-users.

Finally, here’s a link to the official Puppet Camp London 2013 blog – Fun Times and Great Info at Puppet Camp London

Oh yes, and thanks for the post-camp drinks, T-Shirt and Hat! I look forward to Puppet Camp London 2014!

Red Hat Puppet


Display a future or past date in the bash shell

Here’s a quick and easy way to establish what the date will be in a specific number of days from today using the bash shell on Linux. Simply use the ‘-d’ option to the ‘date’ command.

Here’s the current timestamp:

-bash-3.2$ date
Thu Jan 17 15:04:28 GMT 2013

And this will be the date 60 days from now:

-bash-3.2$ date -d "now 60 days"
Mon Mar 18 15:04:31 GMT 2013

You can also use the same code to display dates from past. What was the date 94 days ago?

-bash-3.2$ date -d "now -94 days"
Mon Oct 15 15:07:35 GMT 2012

To get the last calendar day of the previous month:

date -d "-$(date +%d) day" +%Y-%m-%d

(This displays the date that is the current day-of-month number of days ago. So on 17 January, 17 days ago would be 31 December, the last day of the previous month.)
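A related trick (with GNU date, which is what you’ll have on Linux) anchors on the first of the month and steps forward, giving the last calendar day of the current month:

date -d "$(date +%Y-%m-01) +1 month -1 day" +%Y-%m-%d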

Thinkpad W530, Red Hat Enterprise Linux 6, Fedora and Windows 8 Multiboot

Now that we’ve successfully done a clean Windows 8 install on the W530 and got it dual booting with Fedora 17, it’s now time to add another distribution onto the laptop – Red Hat Enterprise Linux 6.

My first attempts to install RHEL 6.3 onto the W530 resulted in the installer’s graphics failing to load. This resulted in the screen displaying a strobing set of psychedelic colours. A few Red Hat Knowledge-base articles which might be relevant:

RHEL6 does not boot on Lenovo W520 Laptop with Discrete option selected to choose nVidia GPU

Blank screen during installation when using certain NVIDIA Quadro Graphics Adapters under Red Hat Enterprise Linux 6

Why won’t the Nvidia driver compile/install/load under Red Hat Enterprise Linux 6

I don’t recall exactly what settings I initially tried in the BIOS for display type which offers the following options:

  • “Integrated” – uses the built-in Intel Integrated Graphics Controller
  • “Discrete” – uses nVidia Graphics
  • “nVidia Optimus” – uses the built-in Intel Integrated Graphics Controller but allows the OS to use nVidia when needed (supported only with Windows 7 and Windows 8)

Anyhow, I attempted to install with the following options on the kernel command line:

xdriver=vesa nomodeset

From what I can tell, that should have allowed the installer’s X Server to start successfully, but it did not. However, the installer helpfully told me that I could use a VNC client to perform the Red Hat install.

I told the installer to select /boot/efi as the EFI install partition, the one shared with Windows 8 and Fedora.

After the install, I was given the option to start Windows 8 or Red Hat Linux. The Fedora choice was no longer listed. Fortunately, there was a backup /boot/efi/EFI/redhat/grub.conf.rpmsave which contained the Fedora/Windows 8 options. It’s now just a case of merging the Red Hat and Fedora grub files.

Here’s the result:

boot=/dev/sda2
device (hd0,5) HD(2,96800,32000,ad8e8d71-db62-4c7a-8603-5bc6ce875d52)
default=0
timeout=7
splashimage=(hd0,5)/grub/splash.xpm.gz
hiddenmenu
 title Fedora (3.6.11-1.fc17.x86_64)
  root (hd0,5)
  kernel /vmlinuz-3.6.11-1.fc17.x86_64 rd.md=0 rd.lvm=0 rd.dm=0 KEYTABLE=us SYSFONT=True rd.luks=0 root=UUID=a6f32b89-45ac-410f-8a1e-562b441304e3 ro LANG=en_US.UTF-8 rhgb quiet
  initrd /initramfs-3.6.11-1.fc17.x86_64.img
 title Windows 8
  set root=(hd0,gpt1)
  chainloader /EFI/Microsoft/Boot/bootmgfw.efi
 title Red Hat Enterprise Linux (2.6.32-279.el6.x86_64)
  root (hd0,8)
  kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=UUID=fe96af7b-07bd-451e-b4de-4eec673f4cca nomodeset rd_NO_LUKS KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_NO_LVM rd_NO_DM rhgb quiet xdriver=vesa
  initrd /initramfs-2.6.32-279.el6.x86_64.img
 title Windows 8
  rootnoverify (hd0,3)
  chainloader +1
 title Fedora
  rootnoverify (hd0,6)
  chainloader +1