Resources for Programming Merit Badge Counselors

Since the Programming Merit Badge is a brand new badge, and it was hard enough to find resources for the much more general Computer Merit Badge, I’ve begun putting together a list of resources.

The first site that I would like to highlight is a very popular “programming chrestomathy” site called Rosetta Code.

Just as the famous Rosetta Stone carries the same text in three different scripts (Ancient Egyptian hieroglyphs on the top portion, Demotic script in the middle, and Ancient Greek on the bottom), the Rosetta Code site seeks to collect solutions to many interesting programming challenges in as many languages as possible.

There is great benefit in seeing the same program written side-by-side in different languages. It allows you to leverage your knowledge of one language to learn another. It’s also a good way to learn various algorithms, and to discover little-known features of languages you already feel comfortable with.
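To give a taste of the kind of exercise hosted there, here is a sketch of one classic chrestomathy-style task – computing the greatest common divisor – written in Python. The task choice and the code are my own illustration, not a copy of the site’s curated solutions:

```python
def gcd(a, b):
    """Greatest common divisor via Euclid's algorithm."""
    while b:
        a, b = b, a % b  # replace (a, b) with (b, a mod b) until b is 0
    return abs(a)

print(gcd(1071, 462))  # → 21
```

Comparing a solution like this to the same task in, say, C or Haskell on Rosetta Code is exactly the side-by-side experience described above.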

The next entry will be an article for Computer and Programming Merit Badge counselors on how to go about teaching programming in a Scouting environment.

Posted in Instruction and Facilitation, The Computer Merit Badge, The Programming Merit Badge | Leave a comment

Introducing the Programming Merit Badge

The long-anticipated Programming Merit Badge has been released.

The requirements are fairly comprehensive, and provide much-deserved attention to the discipline of programming – something that The Computer Merit Badge was not able to do.

Also noted at the Merit Badge Calendar, there are several computer-related Merit Badges slated for release in the coming year:

  • Advanced Computing
  • Computer Aided Design
  • Multi-Media
  • Signs, Signals, Codes (yes, I’d argue computer related)
And finally, it seems that The Computer Merit Badge will be replaced by the “Digital Technology” badge. It is my intention to make this blog a resource for anyone interested in learning about, or contributing to the knowledge base of, all “computer”-related badges.
Please join us at our Facebook group, The Computer Merit Badge, for discussions about all related topics.


Posted in Site News and Announcements, The Computer Merit Badge | Leave a comment

Learning Resources Round-Up, 11-27-2012 discusses the learning site, and also lists a few sites for learning programming from your browser.

Two more that I can think of are:

Posted in Uncategorized | Leave a comment

Blog is Looking For Guest Bloggers

If you are a Scouter, Scout, CMB counselor, or fan of the blog and have something interesting to share about computers – or about teaching some area of computing to Scouts – please let me know so we can discuss your contribution to the site.

Posted in Uncategorized | Leave a comment

World’s Oldest Working Digital Computer

The WITCH was built beginning in 1949 and became operational in 1951. This article discusses the three-year restoration project at The National Museum of Computing that brought the WITCH back to life.

Posted in History of Computing Machinery | Leave a comment

What is the best parallel programming language for initiating students in the world of multicore/parallel computing?

You’re doing them a disservice if you don’t eventually have them use C/C++/Fortran with both MPI and OpenMP for message passing and SMP. My personal opinion on teaching programming concepts, however, is that you should remain language-neutral for as long as possible. You can teach the concepts of message passing and shared memory programming without clouding the main ideas with syntactic sugar. Once the basics are in place and they understand the nuances between message passing and shared memory, you can introduce the actual software tools that make this happen. Once these concepts take hold, you can introduce them to hybrid MPI/OpenMP, GPGPU, and PGAS concepts. I can also suggest two non-mainstream languages. The first is Cray’s Chapel, which is very interesting, particularly if you wish to study the “compiler” and runtime. The other is an open source language that I really adore called Qore, a scripting language built with SMP scalability in mind.
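To keep the two concepts language-neutral before reaching for MPI or OpenMP, here is a small Python sketch of my own (standard library only; the function and variable names are invented for illustration): the same summation is done once in a message-passing style, where workers communicate only by sending results over a queue, and once in a shared-memory style, where workers update one shared variable under a lock.

```python
import threading
import queue

def mp_sum(nums, nworkers=4):
    """Message passing: workers send partial sums as messages; no shared state."""
    q = queue.Queue()
    def worker(chunk):
        q.put(sum(chunk))  # communicate by sending a message
    chunks = [nums[i::nworkers] for i in range(nworkers)]
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(q.get() for _ in range(nworkers))

def sm_sum(nums, nworkers=4):
    """Shared memory: workers update one variable, synchronized by a lock."""
    total = 0
    lock = threading.Lock()
    def worker(chunk):
        nonlocal total
        partial = sum(chunk)
        with lock:              # guard access to the shared variable
            total += partial
    chunks = [nums[i::nworkers] for i in range(nworkers)]
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total

print(mp_sum(list(range(100))), sm_sum(list(range(100))))  # → 4950 4950
```

Both styles compute the same answer; the point is that one coordinates by communication and the other by synchronized access to shared state – the same distinction MPI and OpenMP embody at scale.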

Also, “Shared Memory Consistency Models: A Tutorial” (IEEE Computer, 29(12), pp. 66–76, December 1996) is a nice introduction to the topic of memory models in shared memory programming.


Posted in Instruction and Facilitation, Programming Languages | Leave a comment

A Tour of Go

Go is a modern, compiled language developed primarily through the efforts of Google and some of the folks who originally brought you Unix and its successor, Plan 9.  It has many very cool modern features, and a very strong pedigree.

I am not going to list out the features or cool parts of the language here, but I wanted to share that there is a webpage that provides a tour of its features. Especially if you’re familiar with C, the examples and descriptions will be very interesting to you.

Other than Go, there are two other languages that I keep wanting to learn and find a use for – D and Lua. I may write more about all three (plus Perl, of course) at a future time.

Posted in Online Resources, Programming Languages | Leave a comment

Some Tips for Teaching a Programming Class

I hope to have more resources and discussions here. It has been way too long since my last post, and I have so much I want to share.

Please enjoy the following link that highlights 12 tips for teaching programming. There are tons of resources out there, but it’s still very difficult to know how to best teach programming.

One of the best ideas I have ever heard (not mentioned in the link above) is that when teaching how to program, you should focus on programming constructs and not on a particular language.  Teach in pseudo-code (e.g., generic loops, conditionals, etc.). Have the participants hand-write small programs using these constructs.  Then, as a final endeavor, ask them to write a program in a real language using only a language reference that describes the (hopefully) familiar constructs for that particular language.
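As a sketch of that final step, here is how one invented hand-written exercise (a generic loop plus a conditional) might translate into a real language – Python here, though any language reference would do:

```python
# Pseudo-code the students already hand-wrote:
#   set count to 0
#   for each number from 1 to 20:
#       if number is divisible by 3:
#           add 1 to count
#   print count

count = 0
for number in range(1, 21):   # the generic "for each" loop
    if number % 3 == 0:       # the generic conditional
        count += 1
print(count)  # → 6
```

The mapping is nearly one-to-one, which is the point: once the constructs are familiar, the language reference only has to supply the local syntax.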

Posted in Instruction and Facilitation, Online Resources | Leave a comment

What is Virtual Box? You want this.

Few pieces of software excite me with possibilities the way Virtual Box does. It offers many potential uses – especially for students, recipients, and counselors of The Computer Merit Badge.

Virtual Box is a cross-platform program that will let you run multiple operating systems on your real computer at the same time, without having to reboot. With computer-core counts increasing, virtualization is a great way to take advantage of these resources.

Virtual Box is cross-platform, meaning that it can run on a multitude of popular operating systems, including Windows, Mac OS X, and the various free *nixes (e.g., Linux, FreeBSD).

It is distributed free of charge by the Oracle Corporation, which acquired it from Sun Microsystems. Sun (which also owned OpenOffice and MySQL at the time of the acquisition) had obtained it by purchasing a small software company called Innotek GmbH.

Okay, so what is it?

Virtual Box is a program that allows its users to create and manage virtual machines (computers) on their own desktop computer. The computer on which Virtual Box is installed is called the Host. Virtual machines that are created within Virtual Box are called Guests.

What can you do with a Virtual Machine?

Once you’ve created and configured your virtual machine, you may do anything with it that you would do with a real computer. The catch is that the virtual machines when first created do not have an operating system on them. If you can install an operating system on a real computer, installing an operating system on a newly minted virtual machine is nearly an identical process. Specific “how-to” articles and videos on Virtual Box are common on the Internet, so I am not going to spend any time on this here. The following is an example of the kind of resources people have spent time creating in order to help others use Virtual Box effectively:

There are also incredibly HELPFUL projects out there which provide virtual machines already configured. For example, VirtualBSD is geared toward providing VMware- and Virtual Box-compatible FreeBSD images – great for learning and everything else outlined below!

The remainder of this article will highlight some creative uses for virtual machines.

Self-paced Learning

1. Learn to install and configure other operating systems

Don’t have an extra computer to try out that new version of Windows 7 or Red Hat Linux? Don’t feel like messing with disk partitioning software (like GParted) to create a multi-boot environment? Would you like access to many different operating systems at once, without having to reboot into each one at a time? Virtual Box might just be for you.

Virtual Box is ideal for evaluating and experimenting with operating systems from the comfort of a familiar environment. You can create and destroy virtual machines with the click of a mouse. Screwed up an install or configuration? No problem – just create a new machine and start over.

2. Experiment with different programming environments

With different operating systems come different programming environments. For example, Unix and related operating systems (Linux, FreeBSD) provide some very nice low-level C programming options. They are also ideal for learning to program websites, since they typically come with languages like Perl and Python already installed.

Future postings will be dedicated to quickly and easily setting up programming environments for learning. Virtual Box will definitely be in the toolbox for doing this the right way.

Teaching and Training

3. Provide pre-built training environments

Oracle and many other institutions have embraced the use of pre-built, virtual environments for training.

Being able to provide pre-built environments is particularly useful in non-traditional technical training venues, like Scout Camps. As a Computer Merit Badge counselor teaching the CMB in a semi-primitive camp setting, I can remember the pains of setting up computer labs in the middle of the woods (no joke!). No more lugging old equipment or late nights trying to recreate the Internet among a set of old, poorly configured computers running inferior operating systems. Assuming a Scout can bring along a laptop (a stretch, even today), it is fairly simple to ensure that all tools and environments required for instruction and testing are available to everyone.

Virtual Box can’t help in totally primitive settings, but future posts will cover how to effectively teach The Computer Merit Badge “unplugged”.

4. Distribute training images, pre-configured for self-paced learning

As pointed out above, the parent company that now owns Virtual Box distributes pre-configured virtual machines with its other products for training purposes. Not only can these pre-built environments aid in-person instruction, they can also greatly enhance learning resources meant for self-guided, self-paced learning. No longer do people have to install and properly configure the software they wish to learn; they just download the virtual machine and start it under Virtual Box – one major hurdle to learning what you really want to learn is out of the way!

This means that anyone can develop a set of lessons and tutorials based on an assumed computing environment. There is an upfront cost to creating these Virtual Box images for distribution, but once that is done, anyone using your learning resources can run them on any Host operating system Virtual Box supports. You can serve everyone with your lessons, no matter what operating system they have. This is a HUGE win.

Development and Testing

5. Create a development environment identical to your production set up (e.g., web development)

Virtual Box is also very handy for setting up development environments that mirror the targeted production platform. For example, experienced Web Developers will typically demand a three-tiered environment. The first, and most important, tier is the production tier – the web server(s) that run the site or application that everyone sees.  This tier gets updated only when all new software has been thoroughly tested and QA’d.

The second tier is for testing, and it’s supposed to mirror the production tier as closely as possible. This is to ensure that changes made during development do not cause any problems; it’s meant to flush out any issues that might disrupt the production site.  The third tier is for development, and represents the environment in which the programmer works day-to-day.

Since you typically will not be hosting a website on a local virtual machine (although this conventional wisdom is quickly being upended by the surge in VPS and virtual machine-based hosting services out there), we can say that you’re in some way paying for the production environment (e.g., through a web hosting company).  You don’t want to also pay for separate environments to host your testing and development tiers.

This is where Virtual Box comes into play, by providing a means to recreate the production environment pretty closely on your local machine.  You can develop in a virtual machine on your laptop – you don’t even need a working internet connection, since Virtual Box allows you to emulate the workflow associated with connecting to remote machines.

6. Test your programs on a variety of operating systems and environments

Virtual Box can also complement efforts to test the software that results from your development activities. You can have one set of virtual machines meant specifically for development and another (much more diverse) set of images meant for testing and quality assurance before releasing software.

This isn’t just useful for web development. For example, you can create Windows-based programs and test them in Windows Vista, XP, and 7 – all without rebooting your computer (and provided you have valid installation disks :).

Security and Privacy

7. Create disposable virtual machines

Because virtual machines can be created and discarded so easily, they provide the perfect environment for compartmentalizing (or sandboxing) potentially risky activities into well defined – and deletable – virtual partitions. Be forewarned, however: even though it is difficult for viruses that have infected one virtual machine to infect others or the Host computer itself, the infected virtual machine will still behave like any infected machine. For this reason, it’s important to remain diligent in order to prevent cross-machine infection.

8. Compartmentalize privacy and security concerns

The privacy conscious may also find virtual machines useful for creating self-contained privacy zones. Instead of doing sensitive work inside the Host machine, you can set up virtual machines that act as secure and portable working environments. Be careful, though: a compromised Host machine might allow these virtual machines to be copied off directly, and anyone with Virtual Box installed could then start an instance of your virtual machine on their own computer. If they’re serious about seeing what you’ve been up to, it won’t take them long to break in.

9. Safeguard your kids from your computer and your computer from your kids

One final use (of many) is that a concerned parent may create a virtual machine that is just for the children’s use. Virtual Box allows one to control network settings pretty easily, though this isn’t the ideal solution for protecting your kids online. It might, however, protect YOUR computer from being overrun with Hello Kitty viruses. Note that it’s not hard to escape out of these virtual images, so a diligent child with half a brain might one day figure out how to get to the Host operating system. Online safety ideas will be the subject of future posts.


The one caveat is that performance can be an issue. Virtual machines are like parasites on your Host computer: each Virtual Box instance runs as a single, very resource-hungry process. The Host has no idea that a full OS is being virtualized, but that single process is consuming memory, disk I/O, CPU, and networking on the real machine. Because of this, it’s best to have a Host computer with as many cores as possible, so that each OS (including the Host) can have its own dedicated CPU. This won’t help with demands on networking or I/O, but you can also control what each virtual machine is doing.  Note that properly coordinating resources between a Host operating system and the virtual machines on it is still a subject of ongoing research for those furthering virtual machine technologies.


Virtualization is a very interesting and useful technique for taking advantage of today’s many-core computers, and Virtual Box provides nearly enterprise-level virtual machine capabilities for FREE. There are a multitude of ways virtual machines can be leveraged; the major areas include personal learning, teaching, software development, and enhancing privacy and security.

Finally, I’d like to claim that Virtual Box and other emulation and virtualization technologies are most certainly part of the history of computing machinery. The idea of providing virtual machines has long been a goal, and with programs like this, we’re one step closer to the future of computing.

Future posts will explore each of these topics in depth. Until then, I hope that you’ll be inspired to check out Virtual Box and find more interesting uses for it. Topics will include more advanced uses of Virtual Box (e.g., networking multiple virtual machines on a single Host), experimenting with different programming languages and environments, and highlighting interesting “hobby” operating systems, such as the free ReactOS (a Windows XP/2003-compatible clone).

Posted in History of Computing Machinery, Missing Requirements, Online Resources, Operating Systems, Programming Languages, The Computer Merit Badge | Leave a comment

Burroughs Large Systems

The Burroughs large systems line of computers, first introduced in 1961, is considered the first of the “3rd generation” systems [1].  This generation of systems is hallmarked by the use of integrated circuits, peripheral interfaces (keyboards, monitors), and support for time-sharing operating systems.  Many of the features introduced in this line of computers, initially criticized as being overly complex, are still considered new and innovative.

The B5000 was designed with high level language support in mind, especially for “block structured” languages such as ALGOL and COBOL [1].  Support for FORTRAN and APL, which required support for repetitive arithmetic operations on arrays and vectors, was added in the B6700/B7700 line.

Notable features introduced in the original line include a hardware-managed stack and the use of descriptors for data access.  This allowed for the introduction of virtual memory as well as support for multiprogramming and multiprocessing [1].  For data security, code and data were distinguished using a flag bit – the hardware would not execute data or alter code unless this bit was set properly – which made self-modifying code impossible [1].  The following is a summary of the supported features and implementation details.

High level language support, as mentioned earlier, was a driving design goal.  To facilitate this, the machine offered a hardware-managed stack, which made compiler writing easier, and supported data descriptors.  Later incarnations of this line moved the descriptor bits outside of the 48-bit word [1].

As with the IBM System/360, the hardware and the operating system were designed with respect to one another.  The core of the OS was something called the Master Control Program [1].  The hardware included special instructions that assisted memory management, and processor control operations that facilitated multi-processor environments.  The one specified in [1], and one which I find particularly interesting, is the HEYU instruction.  More on this later.

The use of descriptor bits (eventually grown from 1 bit to 3) also made virtual memory easy to implement, since they could be used to denote whether a block of memory was an array or some other segment [1].

The instruction set was refined as time went on, but not by much.  By all accounts it was a simple instruction set, and one that facilitated taking full advantage of the hardware.  The ability to tag data also reduced the number of instructions needed, since the op-code could distinguish between the likes of single and double precision.  For example, the B5000 had two ADD instructions (for single and double precision), but crafty use of the data-tagging field allowed the B6000 to reduce these to a single ADD.
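As a toy model of that idea – entirely my own illustration in Python, not Burroughs machine code – a tag carried alongside each word lets a single ADD operation dispatch on precision at run time, rather than encoding the distinction in separate op-codes:

```python
# Tag values carried with each "word", loosely modeling B6000-style data tagging.
SINGLE, DOUBLE = 0, 1

def make_word(tag, value):
    """Bundle a value with its type tag, as tagged hardware words do."""
    return {"tag": tag, "value": value}

def add(w1, w2):
    """One ADD operation: the tags, not the instruction, determine precision."""
    if w1["tag"] == DOUBLE or w2["tag"] == DOUBLE:
        result_tag = DOUBLE   # promote to double precision
    else:
        result_tag = SINGLE
    return make_word(result_tag, w1["value"] + w2["value"])

a = make_word(SINGLE, 2.5)
b = make_word(DOUBLE, 4.25)
print(add(a, b))  # one ADD handles both precisions; result is tagged DOUBLE
```

The single `add` function standing in for what would otherwise be two instructions is the point: the data describes itself, so the instruction set can stay small.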

Probably the single most “modern” feature offered by this line is its support for multitasking and symmetric multi-processing (SMP).  In fact, it even supported concurrent programming in ALGOL and COBOL [1] – something that I think is still outside the realm of most programmers.  Features supporting multitasking included the read-lock instruction, which can be used for task synchronization, and the HEYU instruction, which allows the interruption of all concurrent processes.

The IBM System/360 was innovative in its own right.  A showcase for Brooks‘ management skills and Amdahl‘s technical prowess and forward thinking, this system became a huge success because of its thoughtful analysis of customer needs – on both a practical and a technological scale.  The system was designed from the ground up to be configurable across the small-to-large machine spectrum.  It was designed as a general purpose computer to meet the computing needs of nearly all applications – including scientific computing. The analysis of what made a practical and productive computing system was extremely “Brooks-like”, particularly in its focus on “answers-per-month” versus bits-per-microsecond.

Advanced concepts discussed in [2] include the following.

The development of computers as “families” of backward- and forward-compatible systems.  This appealed to customers’ business decisions as well as to the technical ability to add to an existing infrastructure to meet increasing and changing needs.

A general I/O system was required – this parallels the desire on the Burroughs line for devices such as monitors and keyboards.  The observation was made that most of the cycles taken up by the CPU had to do with system-level demands – compiling, I/O, traversing data structures, etc. [2].  These functions had to be made efficient.

Perhaps one of the great concepts was that of the system supervising itself.  While the Burroughs line was able to create a reliable, self-monitoring system, the 360 took this to a new level by sidestepping the need for manual intervention.  It was my impression that the 360 implemented this in a much more robust way than the Burroughs line.  It is also an area of research still pursued by IBM.  This also affected the need for redundant systems – disks, CPUs, etc.

The system required a storage capacity greater than 32,000 words.  It also recognized the needs of applications that required floating-point sizes greater than 36 bits.

The paper went into some fairly fine detail about the implementation of the data types, the character set used, and how the subsystems were coordinated to provide an efficient, general purpose machine.  Without a doubt, the 360 was a technical success that served IBM’s customers very well.


Some key differences between the IBM 360 and the Burroughs line included:

1. Use of stack (Burroughs) versus named registers (360)
2. Support for high level languages versus general purpose applications
3. Differences on data and memory management
4. Differences in approaches regarding system reliability


Both systems are milestones in the development of useful, large-scale computing systems.  The Burroughs line introduced features that are still considered new today.  In this day and age of general purpose hardware running general purpose operating systems, it is refreshing to read about a time when the hardware *and* the operating system were developed with respect to one another.  It seems the days of getting the most out of your hardware through smart design choices at both the software and hardware levels are gone.


[1] The Architecture of the Burroughs B5000 – 20 Years Later and Still Ahead of the Times?; Alastair J.W. Mayer
[2] The Architecture of the IBM System/360; G.M. Amdahl, G.A. Blaauw, F.P. Brooks, Jr.

Posted in History of Computing Machinery | Tagged , , , , | Leave a comment