Wednesday, May 30, 2012

CAD for Linux (AutoCAD clones and near-equivalents)

AutoCAD has not been ported to Linux, but several alternatives exist.

Computer-aided design (CAD) is the use of computer technology for the process of design and design documentation. But are there any good free CAD apps for Linux? Strangely, that is one of the questions we often receive in our mail. We will try to list not just the free CAD apps here, but also the non-free ones that work well under Linux (in no particular order).

Top CAD Apps for Linux

Bricscad is a CAD package developed by Bricsys, originally built using the IntelliCAD engine. We have featured Bricscad before in our listing of top commercial apps for Linux. Bricscad is among the few commercially supported CAD packages that run on Linux, and it supports most contemporary AutoCAD functionality.
GPLed QCad

QCad is a CAD software package for 2D design and drafting. QCad Professional is the non-free edition, with an option for a limited-time free trial. There is also a community edition of QCad which is licensed under the GPL. You can download the QCad community edition for free.

FreeCAD is a free and open source 3D CAD/CAE program based on Open CASCADE, Qt and Python. It features key concepts like macro recording, workbenches, the ability to run as a server, and dynamically loadable application extensions. Windows, Mac and Linux versions are available.
VariCAD for Linux

VariCAD is a 3D/2D CAD program primarily meant for mechanical engineering design. VariCAD provides support for parameters and geometric constraints, tools for shells, pipelines, sheet metal unbending and crash tests, assembly support, mechanical part and symbol libraries, calculations, bills of materials, and more. VariCAD is a non-free, proprietary application with packages for Windows and Linux readily available.

Open CASCADE Technology is a software development platform for 3D surface and solid modeling, visualization, data exchange and rapid application development. Open CASCADE Technology is available for free download and is licensed under Open CASCADE Technology Public License which the developer claims to be 'LGPL-like with certain differences'. The Debian project considers the license to meet the Debian Free Software Guidelines and has accepted Open CASCADE into its main archive.
CYCAS 3D CAD
CYCAS is a 2D/3D CAD application for Windows and Linux which offers special elements and techniques for architectural design apart from normal CAD techniques. CYCAS enables intuitive and uncomplicated handling of 2D and 3D elements. It is non-free and proprietary.

CityEngine is a 3D modeling application specialized in the generation of three-dimensional urban environments. Key features of CityEngine include GIS/CAD data support, dynamic city layouts, street network patterns, map-controlled city modeling, industry-standard 3D formats and more. CityEngine is non-free and is available for Windows, Mac and Linux.

BRL-CAD is a cross-platform open source solid modeling system that includes interactive geometry editing, high-performance ray-tracing for rendering and geometric analysis, image and signal-processing tools, a system performance analysis benchmark suite, libraries for robust geometric representation, with more than 20 years of active development. BRL-CAD is free to download and is available for almost all platforms out there.
Edit: 3 Other CAD Apps for Linux Recommended by Our Readers
  • DraftSight - It was a really big mistake on our part not to include DraftSight in the first place. DraftSight is a free 2D CAD product developed by Dassault Systèmes. DraftSight lets users create, edit and view DWG and DXF files easily. Specific packages for Ubuntu, Fedora, Mandriva and Suse are available for free download.
  • LibreCAD is a free and open source personal CAD application for Windows, Mac and Linux. LibreCAD is among the very few truly community-driven CAD apps for Linux.

Sunday, May 27, 2012

'Father of the Internet' Warns Web Freedom Is Under Attack

The Hill
(05/21/12) Andrew Feinberg

Governments around the world are trying to use intellectual property and cybersecurity issues to control the Internet, says Google vice president and chief Internet evangelist Vint Cerf. "Political structures ... are often scared by the possibility that the general public might figure out that they don't want them in power," Cerf says. He speculates that the International Telecommunications Union will likely become the global Internet cop, and expects the group to try to lock in mandatory intellectual property protections as a backdoor for easy Web surveillance. The public should view even good-faith efforts at Internet policymaking skeptically because balancing freedom and security "isn't something that government alone is going to figure out," Cerf says. He is concerned about the U.S. Cybersecurity and Intelligence Protection Act passed by the House, because it does not offer enough limits on how information about cyberthreats would be used. Still, Cerf expresses optimism that resourceful engineers will find a way around hostile government attempts to restrict access.

View Full Article

Friday, May 25, 2012

Why does Wall Street hate open source?

[Ma3bar Announcement] "Linux Essentials Certification" exam at UOB - June 5, 2012 - Free of charge

"Linux Essentials Certification" exam at UOB - June 5, 2012 - Free of charge
Free Linux Essentials Certification Exam
University of Balamand,
Al-Kurah, Lebanon
Tuesday June 5, 2012 - 9:30 AM
(Registration is open until 8:00 PM June 1, 2012)
(Beirut, Lebanon: April 19, 2012) The Linux Professional Institute - Middle East (LPI-ME) issued a call for volunteers to participate in the first "beta" exams for the Linux Professional Institute's (LPI) "Linux Essentials" program.

"Linux Essentials" is an innovative new program measuring foundational knowledge in Linux and Open Source Software and will be publicly released in June 2012.  "Beta" exams are offered free to volunteers who wish to assist LPI in its exam development process for the "Linux Essentials" program.

During the month of May and early June, there will be various sessions for volunteers to take "beta" exams which are offered free of charge.  This volunteer effort will enable LPI to measure the "Linux Essentials" exam for quality, accuracy and relevancy.  All interested individuals who do not hold an existing Linux certification are invited to participate--however seating is limited to ten per session.

Successful candidates who pass the exam will be awarded a "Linux Essentials Certificate of Achievement".  Alternatively candidates may choose to have their exam results deleted from their registration record. Final exam scores for the volunteer "beta" exams will be available in late June 2012.

Each "beta" session lasts ninety minutes and consists of sixty questions. To participate, exam volunteers must obtain an LPI ID at and register in advance for the Linux Essentials "beta" exam with their focal point.

Targeted at new technology users, the "Linux Essentials" program is set to be adopted by schools, universities, educational authorities, training centers and others commencing June 2012.

The Linux Professional Institute is globally supported by the IT industry, enterprise customers, community professionals, government entities and the educational community. LPI's certification program is supported by an affiliate network spanning five continents and is distributed worldwide in multiple languages at more than 7,000 testing locations. Since 1999, LPI has delivered over 300,000 exams and 100,000 LPIC certifications around the world.

Overview of the Exam

To secure the Certificate of Achievement in Linux Essentials, a candidate should:

  • Understand the basic concepts of processes, programs and the components of an operating system
  • Have a basic knowledge of computer hardware
  • Demonstrate a knowledge of open source applications in the workplace as they relate to closed source equivalents
  • Understand navigation systems on a Linux desktop and where to go for help
  • Have a rudimentary ability to work on the command line and with files
  • Be able to use a basic command line editor
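To give a feel for the expected level, the command-line and file skills above boil down to operations like these (the file names are illustrative, and everything happens in a scratch directory):

```shell
set -e
cd "$(mktemp -d)"                # scratch directory; leaves no traces behind
echo "hello linux" > notes.txt   # create a file with a shell redirect
cp notes.txt backup.txt          # copy it
grep -c hello backup.txt         # search inside it; prints 1
rm backup.txt                    # remove the copy
```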
Detailed exam objectives and coverage are available at
Registration form
Reserve your seat in the exam on June 5 at the University of Balamand by filling and submitting the form below. For further inquiries, please contact us at
Registration is open until 8:00 PM June 1, 2012
Ma3bar Announcement

Unsubscribe from this newsletter

Saturday, May 19, 2012

Fwd: GNU/Linux LPIC-1 Training Workshop - July 2012 @ AUST Beirut

Registration NOW OPEN

Registration form available at

LPI-101 – July 3 – 6, 2012

LPI-102 – July 9 – 12, 2012

In collaboration with LPI, the Linux Professional Institute; Ma3bar, the Arab Support Center for Free & Open Source Software; and the IEEE Computer Society – Lebanon Chapter, the Department of Computer Science at the American University of Science and Technology (AUST) announces a GNU/Linux training workshop on the AUST campus, Ashrafieh, Beirut, Lebanon. This workshop covers two courses, LPI-101 and LPI-102, which lead to the renowned LPIC-1 certification.


As part of Ma3bar's mission to promote Free Software in Arab societies, this training workshop is the fourth in a series of GNU/Linux training workshops that aim to increase the level of competence in Free Software environments.

LPIC-1 – Junior Level Linux Professional

LPIC-1 is the first IT certification program to be professionally accredited by the National Commission for Certifying Agencies (NCCA). It requires passing both the LPI-101 and LPI-102 exams. To pass this level, a candidate should be able to:

- Work at the GNU/Linux command line

- Perform easy maintenance tasks: help out users, add users to a larger system, backup & restore, shutdown & reboot

- Install and configure a workstation (including X) and connect it to a LAN, or a stand-alone PC to the Internet.

LPI-101 – July 3 – 6, 2012
      (4 sessions, 2:00–8:00 PM)

This course covers basic skills for the GNU/Linux professional that are common to major distributions of GNU/Linux.

Topics include:

- System Architecture
- Determine and configure hardware settings
- Boot the system
- Change run-levels and shutdown or reboot system
- GNU/Linux Installation and Package Management
- Design hard disk layout
- Install a boot manager
- Manage shared libraries
- Use Debian package management
- Use RPM and YUM package management
- GNU and Unix Commands
- Work on the command line
- Process text streams using filters
- Perform basic file management
- Use streams, pipes and redirects
- Create, monitor and kill processes
- Modify process execution priorities
- Search text files using regular expressions
- Perform basic file editing operations using vi
- Devices, Linux Filesystems, Filesystem Hierarchy Standard
- Create partitions and filesystems
- Maintain the integrity of filesystems
- Control mounting and unmounting of filesystems
- Manage disk quotas
- Manage file permissions and ownership
- Create and change hard and symbolic links
- Find system files and place files in the correct location
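As a taste of the permissions-and-links objectives above, a minimal shell session (file names are illustrative; `stat -c` is the GNU coreutils form):

```shell
set -e
cd "$(mktemp -d)"            # throwaway working directory
echo data > file.txt
chmod 640 file.txt           # manage file permissions (rw- r-- ---)
ln file.txt hard.txt         # hard link: a second name for the same inode
ln -s file.txt soft.txt      # symbolic link: a named pointer to the path
stat -c '%h' file.txt        # hard-link count; prints 2
```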

LPI-102 – July 9-12, 2012
      (4 sessions, 2:00–8:00 PM)

This course covers basic & more advanced skills for the GNU/Linux professional that are common to major distributions of GNU/Linux.

Topics include:

- Shells, Scripting and Data Management
- Customize and use the shell environment
- Customize or write simple scripts
- SQL data management
- User Interfaces and Desktops
- Install and configure X11
- Setup a display manager
- Accessibility
- Administrative Tasks
- Manage user and group accounts and related system files
- Automate system administration tasks by scheduling jobs
- Localization and internationalization
- Essential System Services
- Maintain system time
- System logging
- Mail Transfer Agent (MTA) basics
- Manage printers and printing
- Networking Fundamentals
- Fundamentals of internet protocols
- Basic network configuration
- Basic network troubleshooting
- Configure client side DNS
- Security
- Perform security administration tasks
- Setup host security
- Securing data with encryption
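The "customize or write simple scripts" objective is roughly this level of shell scripting (the function name and sample file are made up for the example):

```shell
#!/bin/sh
set -e
# Print "<file>: <n> lines" for each readable argument.
count_lines() {
    for f in "$@"; do
        [ -r "$f" ] && printf '%s: %s lines\n' "$f" "$(wc -l < "$f")"
    done
}

tmp="$(mktemp)"                 # a three-line sample file
printf 'a\nb\nc\n' > "$tmp"
count_lines "$tmp"              # prints "<tmp path>: 3 lines"
rm -f "$tmp"
```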

LPIC-1 certification exams

LPIC-1 certification requires passing both exams 101 and 102. These two exams can be taken anytime at any LPI certified exam center. It is recommended that trainees consider taking the exams at least two weeks after the workshop in order to have adequate time to prepare, practice and review. More information about the exams will be provided during the workshop.

Who should attend

- Beginner users in GNU/Linux environments who want to gain further technical skills

- Users of GNU/Linux who want to start building basic skills in system administration and configuration

- Computer professionals, fresh graduates, or students in graduate schools who want to become certified GNU/Linux professionals


Registration is open until June 22, 2012. Early bird registration fees apply on or before June 2, 2012. Registration is confirmed only upon filling the on-line application form at and receiving the full payment.

Travel & Accommodation

For participants coming from neighboring Arab countries, assistance and special rates on travel and accommodation will be arranged. Please contact for further details.

What you get

- Student material and study aid kit, including Approved Training Material – ATM, presentations, labs, electronic material, practice exams, and LPI portal access

- Certificate of attendance

- 90% certification success rate

Cost Matrix

  • LPI-101 (IEEE student member rate) *: USD 400 early bird, USD 450 full
  • LPI-102 (IEEE student member rate) * §: USD 400 early bird, USD 450 full
  • LPI-101 (student rate) §: USD 450 early bird, USD 500 full
  • LPI-102 (student rate) §: USD 450 early bird, USD 500 full
  • LPI-101 (IEEE member professional rate) †: USD 750 early bird, USD 850 full
  • LPI-102 (IEEE member professional rate) †: USD 750 early bird, USD 850 full
  • LPI-101 (professional rate): USD 900 early bird, USD 1,000 full
  • LPI-102 (professional rate): USD 900 early bird, USD 1,000 full
  • LPI-101 (group professional rate) ‡: USD 800 early bird, USD 900 full
  • LPI-102 (group professional rate) ‡: USD 800 early bird, USD 900 full

* a valid IEEE student membership card for 2012 is required.
§ an eligible student must have a valid student ID from an accredited institution of higher education, and should be less than 25 years of age.
† a valid IEEE membership card for 2012 is required.
‡ an eligible group consists of two or more persons delegated by an organization.

Contact us

For more information, please contact:

Dr. Aziz M. Barbar
Chair, IEEE Computer Chapter, Lebanon
Chairperson, Department of Computer Science
AUST - American University of Science & Technology
Alfred Naccash Avenue – Ashrafieh
P.O. Box: 16-6452 Beirut 1100-2130, Lebanon
Tel : +961 (0)1 218 716, Ext. 311
Fax: +961 (0)1 339 302

You are receiving this message because you are registered to this mailing list at We respect your privacy and hold your communication preferences in the highest regard. If you have any suggestions or questions about the e-mail subscriptions, or if you wish to unsubscribe from this list, please contact us at:


LPIC-1 GNU/Linux Workshop, July 2012

Saturday, May 12, 2012

Bringing Open, User-Centric Cloud Infrastructure to Research Communities

CORDIS News

European researchers working on the VENUS-C project have developed an open, scalable, and user-centered cloud computing infrastructure, highlighting an attempt to implement a user-centric approach to the cloud. Cloud computing empowers researchers "in a number of different ways, enabling them not only to do better science by accelerating discovery but also new science they could not have done before," says VENUS-C project director Andrea Manieri. The new infrastructure integrates easily with users' working environments and provides on-demand access to cloud resources as and when needed. "Our approach to the interoperability layer tackles current challenges with our users firmly in mind," Manieri says. The researchers used the VENUS-C infrastructure on Microsoft's Windows Azure platform to run BLAST, a data-intensive tool used by biologists to find regions of local similarity in amino-acid sequences of different proteins. The VENUS-C infrastructure made the experiment cost less than 600 euros and take just a week to process data that would normally have taken more than a year. "The advantage of using VENUS-C BLAST compared with renting cloud resources and deploying high-performance computing or high-throughput versions of BLAST is that deployment efforts are minimized and client impact is also minimal, since users don’t have to log in on a different machine," says VENUS-C's Ignacio Blanquer.

View Full Article

Google Gets License for Driverless Car

(05/08/12) Thomas Claburn

Nevada has issued a license that permits Google to test its experimental self-driving cars on state roads. Google, which provided demonstrations of its autonomous cars on state freeways, highways, and roads in Carson City and Las Vegas to Nevada's Autonomous Review Committee, also received special red license plates bearing an infinity symbol. "We're excited to receive the first testing license for self-driving vehicles in Nevada," says a Google representative. "We believe the state's framework--the first of its kind--will help speed up the delivery of technology that will make driving safer and more enjoyable." The Nevada Department of Motor Vehicles says automakers also have expressed interest in testing autonomous vehicles in the future. Autonomous vehicle legislation was introduced in the California State Assembly in March, and Arizona, Hawaii, and Florida also are in the process of considering legislation. Meanwhile, Google recently acquired a Federal Communications Commission permit to operate automatic cruise control radar units in the 76.0-77.0 GHz band for driverless car navigation.

View Full Article

Friday, May 11, 2012

A successful Git branching model

Published: January 05, 2010

In this post I present the development model that I’ve introduced for all of my projects (both at work and private) about a year ago, and which has turned out to be very successful. I’ve been meaning to write about it for a while now, but I’ve never really found the time to do so thoroughly, until now. I won’t talk about any of the projects’ details, merely about the branching strategy and release management.
It focuses around Git as the tool for the versioning of all of our source code.

Why git?

For a thorough discussion on the pros and cons of Git compared to centralized source code control systems, see the web. There are plenty of flame wars going on there. As a developer, I prefer Git above all other tools around today. Git really changed the way developers think of merging and branching. From the classic CVS/Subversion world I came from, merging/branching has always been considered a bit scary (“beware of merge conflicts, they bite you!”) and something you only do every once in a while.
But with Git, these actions are extremely cheap and simple, and they are considered one of the core parts of your daily workflow, really. For example, in CVS/Subversion books, branching and merging is first discussed in the later chapters (for advanced users), while in every Git book, it’s already covered in chapter 3 (basics).
As a consequence of its simplicity and repetitive nature, branching and merging are no longer something to be afraid of. Version control tools are supposed to assist in branching/merging more than anything else.
Enough about the tools, let’s head onto the development model. The model that I’m going to present here is essentially no more than a set of procedures that every team member has to follow in order to come to a managed software development process.

Decentralized but centralized

The repository setup that we use and that works well with this branching model, is that with a central “truth” repo. Note that this repo is only considered to be the central one (since Git is a DVCS, there is no such thing as a central repo at a technical level). We will refer to this repo as origin, since this name is familiar to all Git users.
Each developer pulls and pushes to origin. But besides the centralized push-pull relationships, each developer may also pull changes from other peers to form sub teams. For example, this might be useful to work together with two or more developers on a big new feature, before pushing the work in progress to origin prematurely. In the figure above, there are subteams of Alice and Bob, Alice and David, and Clair and David.
Technically, this means nothing more than that Alice has defined a Git remote, named bob, pointing to Bob’s repository, and vice versa.
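A sketch of that wiring from Alice's side (the URL and branch name here are illustrative, not from the post):

```shell
# Add Bob's repository as a named remote:
git remote add bob git://bobs-host/project.git
# Fetch Bob's branches without touching local ones:
git fetch bob
# Optionally integrate one of them (branch name is made up):
git merge bob/shared-feature
```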

The main branches

At the core, the development model is greatly inspired by existing models out there. The central repo holds two main branches with an infinite lifetime:
  • master
  • develop
The master branch at origin should be familiar to every Git user. Parallel to the master branch, another branch exists called develop.
We consider origin/master to be the main branch where the source code of HEAD always reflects a production-ready state.
We consider origin/develop to be the main branch where the source code of HEAD always reflects a state with the latest delivered development changes for the next release. Some would call this the “integration branch”. This is where any automatic nightly builds are built from.
When the source code in the develop branch reaches a stable point and is ready to be released, all of the changes should be merged back into master somehow and then tagged with a release number. How this is done in detail will be discussed further on.
Therefore, each time when changes are merged back into master, this is a new production release by definition. We tend to be very strict at this, so that theoretically, we could use a Git hook script to automatically build and roll-out our software to our production servers everytime there was a commit on master.
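Such a hook might look like the following sketch; Git feeds one "&lt;oldrev&gt; &lt;newrev&gt; &lt;refname&gt;" line per updated ref on stdin, and the deploy script name is a placeholder, not part of Git:

```shell
#!/bin/sh
# post-receive hook on the origin server: react only to pushes to master.
while read oldrev newrev refname; do
    if [ "$refname" = "refs/heads/master" ]; then
        echo "master updated: deploying $newrev"
        # ./ "$newrev"   # hypothetical build-and-roll-out script
    fi
done
```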

Supporting branches

Next to the main branches master and develop, our development model uses a variety of supporting branches to aid parallel development between team members, ease tracking of features, prepare for production releases and to assist in quickly fixing live production problems. Unlike the main branches, these branches always have a limited life time, since they will be removed eventually.
The different types of branches we may use are:
  • Feature branches
  • Release branches
  • Hotfix branches
Each of these branches have a specific purpose and are bound to strict rules as to which branches may be their originating branch and which branches must be their merge targets. We will walk through them in a minute.
By no means are these branches “special” from a technical perspective. The branch types are categorized by how we use them. They are of course plain old Git branches.

Feature branches

May branch off from: develop
Must merge back into: develop
Branch naming convention: anything except master, develop, release-*, or hotfix-*
Feature branches (sometimes called topic branches) are used to develop new features for the upcoming or a distant future release. When starting development of a feature, the target release in which this feature will be incorporated may well be unknown at that point. The essence of a feature branch is that it exists as long as the feature is in development, but will eventually be merged back into develop (to definitely add the new feature to the upcoming release) or discarded (in case of a disappointing experiment).
Feature branches typically exist in developer repos only, not in origin.

Creating a feature branch

When starting work on a new feature, branch off from the develop branch.
$ git checkout -b myfeature develop
Switched to a new branch "myfeature"

Incorporating a finished feature on develop

Finished features may be merged into the develop branch to definitely add them to the upcoming release:
$ git checkout develop
Switched to branch 'develop'
$ git merge --no-ff myfeature
Updating ea1b82a..05e9557
(Summary of changes)
$ git branch -d myfeature
Deleted branch myfeature (was 05e9557).
$ git push origin develop
The --no-ff flag causes the merge to always create a new commit object, even if the merge could be performed with a fast-forward. This avoids losing information about the historical existence of a feature branch and groups together all commits that together added the feature. Compare:
In the latter case, it is impossible to see from the Git history which of the commit objects together have implemented a feature—you would have to manually read all the log messages. Reverting a whole feature (i.e. a group of commits), is a true headache in the latter situation, whereas it is easily done if the --no-ff flag was used.
Yes, it will create a few more (empty) commit objects, but the gain is much bigger than that cost.
Unfortunately, I have not found a way to make --no-ff the default behaviour of git merge yet, but it really should be.
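Git versions released after this post (1.7.6 and later, to the best of my knowledge) did add configuration settings for exactly this; treat the snippet below as a sketch to check against your own Git version:

```shell
# Inside a repository: make every merge create a merge commit.
git config merge.ff false
# Or apply it only to merges performed while on develop:
git config branch.develop.mergeoptions "--no-ff"
```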

Release branches

May branch off from: develop
Must merge back into: develop and master
Branch naming convention: release-*
Release branches support preparation of a new production release. They allow for last-minute dotting of i’s and crossing t’s. Furthermore, they allow for minor bug fixes and preparing meta-data for a release (version number, build dates, etc.). By doing all of this work on a release branch, the develop branch is cleared to receive features for the next big release.
The key moment to branch off a new release branch from develop is when develop (almost) reflects the desired state of the new release. At least all features that are targeted for the release-to-be-built must be merged in to develop at this point in time. All features targeted at future releases may not—they must wait until after the release branch is branched off.
It is exactly at the start of a release branch that the upcoming release gets assigned a version number—not any earlier. Up until that moment, the develop branch reflected changes for the “next release”, but it is unclear whether that “next release” will eventually become 0.3 or 1.0, until the release branch is started. That decision is made on the start of the release branch and is carried out by the project’s rules on version number bumping.

Creating a release branch

Release branches are created from the develop branch. For example, say version 1.1.5 is the current production release and we have a big release coming up. The state of develop is ready for the “next release” and we have decided that this will become version 1.2 (rather than 1.1.6 or 2.0). So we branch off and give the release branch a name reflecting the new version number:
$ git checkout -b release-1.2 develop
Switched to a new branch "release-1.2"
$ ./ 1.2
Files modified successfully, version bumped to 1.2.
$ git commit -a -m "Bumped version number to 1.2"
[release-1.2 74d9424] Bumped version number to 1.2
1 files changed, 1 insertions(+), 1 deletions(-)
After creating a new branch and switching to it, we bump the version number. Here, is a fictional shell script that changes some files in the working copy to reflect the new version. (This can of course be a manual change—the point being that some files change.) Then, the bumped version number is committed.
This new branch may exist there for a while, until the release may be rolled out definitely. During that time, bug fixes may be applied in this branch (rather than on the develop branch). Adding large new features here is strictly prohibited. They must be merged into develop, and therefore, wait for the next big release.

Finishing a release branch

When the state of the release branch is ready to become a real release, some actions need to be carried out. First, the release branch is merged into master (since every commit on master is a new release by definition, remember). Next, that commit on master must be tagged for easy future reference to this historical version. Finally, the changes made on the release branch need to be merged back into develop, so that future releases also contain these bug fixes.
The first two steps in Git:
$ git checkout master
Switched to branch 'master'
$ git merge --no-ff release-1.2
Merge made by recursive.
(Summary of changes)
$ git tag -a 1.2
The release is now done, and tagged for future reference.
Edit: You might as well want to use the -s or -u <key> flags to sign your tag cryptographically.
To keep the changes made in the release branch, we need to merge those back into develop, though. In Git:
$ git checkout develop
Switched to branch 'develop'
$ git merge --no-ff release-1.2
Merge made by recursive.
(Summary of changes)
This step may well lead to a merge conflict (probably even, since we have changed the version number). If so, fix it and commit.
Now we are really done and the release branch may be removed, since we don’t need it anymore:
$ git branch -d release-1.2
Deleted branch release-1.2 (was ff452fe).

Hotfix branches

May branch off from: master
Must merge back into: develop and master
Branch naming convention: hotfix-*
Hotfix branches are very much like release branches in that they are also meant to prepare for a new production release, albeit unplanned. They arise from the necessity to act immediately upon an undesired state of a live production version. When a critical bug in a production version must be resolved immediately, a hotfix branch may be branched off from the corresponding tag on the master branch that marks the production version.
The essence is that work of team members (on the develop branch) can continue, while another person is preparing a quick production fix.

Creating the hotfix branch

Hotfix branches are created from the master branch. For example, say version 1.2 is the current production release running live and causing troubles due to a severe bug. But changes on develop are yet unstable. We may then branch off a hotfix branch and start fixing the problem:
$ git checkout -b hotfix-1.2.1 master
Switched to a new branch "hotfix-1.2.1"
$ ./ 1.2.1
Files modified successfully, version bumped to 1.2.1.
$ git commit -a -m "Bumped version number to 1.2.1"
[hotfix-1.2.1 41e61bb] Bumped version number to 1.2.1
1 files changed, 1 insertions(+), 1 deletions(-)
Don’t forget to bump the version number after branching off!
Then, fix the bug and commit the fix in one or more separate commits.
$ git commit -m "Fixed severe production problem"
[hotfix-1.2.1 abbe5d6] Fixed severe production problem
5 files changed, 32 insertions(+), 17 deletions(-)

Finishing a hotfix branch

When finished, the bugfix needs to be merged back into master, but also needs to be merged back into develop, in order to safeguard that the bugfix is included in the next release as well. This is completely similar to how release branches are finished.
First, update master and tag the release.
$ git checkout master
Switched to branch 'master'
$ git merge --no-ff hotfix-1.2.1
Merge made by recursive.
(Summary of changes)
$ git tag -a 1.2.1
Edit: You might as well want to use the -s or -u <key> flags to sign your tag cryptographically.
Next, include the bugfix in develop, too:
$ git checkout develop
Switched to branch 'develop'
$ git merge --no-ff hotfix-1.2.1
Merge made by recursive.
(Summary of changes)
The one exception to the rule here is that, when a release branch currently exists, the hotfix changes need to be merged into that release branch, instead of develop. Back-merging the bugfix into the release branch will eventually result in the bugfix being merged into develop too, when the release branch is finished. (If work in develop immediately requires this bugfix and cannot wait for the release branch to be finished, you may safely merge the bugfix into develop now already as well.)
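The commands for that exception mirror the ones above; release-1.3 is an illustrative branch name:

```shell
$ git checkout release-1.3
Switched to branch 'release-1.3'
$ git merge --no-ff hotfix-1.2.1
Merge made by recursive.
(Summary of changes)
```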
Finally, remove the temporary branch:
$ git branch -d hotfix-1.2.1
Deleted branch hotfix-1.2.1 (was abbe5d6).


While there is nothing really shocking new to this branching model, the “big picture” figure that this post began with has turned out to be tremendously useful in our projects. It forms an elegant mental model that is easy to comprehend and allows team members to develop a shared understanding of the branching and releasing processes.
A high-quality PDF version of the figure is provided here. Go ahead and hang it on the wall for quick reference at any time.
Update: And for anyone who requested it: here’s the gitflow-model.src.key of the main diagram image (Apple Keynote).
Feel free to add your comments!

Friday, May 4, 2012

Harvard and M.I.T. Team Up to Offer Free Online Courses

New York Times (05/02/12) Tamar Lewin

Harvard University and the Massachusetts Institute of Technology (MIT) announced a plan to offer free massive open online courses under their edX partnership. Overseeing edX will be a nonprofit organization that Harvard and MIT will govern equally, and each school has pledged $30 million to the initiative. EdX's inaugural president will be Anant Agarwal, director of MIT's Computer Science and Artificial Intelligence Laboratory, while Harvard's contribution will be supervised by provost Alan M. Garber. University officials say the new online platform would be used to research educational technologies and methods as well as to build a global community of online students. Included in the edX project will be engineering courses and humanities courses, in which crowdsourcing or software may be used to grade essays. Harvard Corporation's Lawrence S. Bacow says education technology currently lacks "an online platform that gives faculty the capacity to customize the content of their own highly interactive courses." The edX effort faces competition from Coursera's similar partnerships with Stanford, Princeton, the University of Pennsylvania, and the University of Michigan. The rapid evolution of online education technology is such that those in the new ventures say the courses are still in an experimental stage.

View Full Article

Thursday, May 3, 2012

PuTTY: A Free Telnet/SSH Client

PuTTY is a free implementation of Telnet and SSH for Windows and Unix platforms, along with an xterm terminal emulator. It is written and maintained primarily by Simon Tatham.
The latest version is beta 0.62.
LEGAL WARNING: Use of PuTTY, PSCP, PSFTP and Plink is illegal in countries where encryption is outlawed. I believe it is legal to use PuTTY, PSCP, PSFTP and Plink in England and Wales and in many other countries, but I am not a lawyer and so if in doubt you should seek legal advice before downloading it.

Use of the Telnet-only binary (PuTTYtel) is unrestricted by any cryptography laws.

Latest news

2011-12-10 PuTTY 0.62 released
PuTTY 0.62 is out, containing only bug fixes from 0.61, in particular a security fix preventing passwords from being accidentally retained in memory.
2011-11-27 PuTTY 0.62 pre-release builds available
PuTTY 0.61 had a few noticeable bugs in it (but nothing security-related), so we are planning to make a 0.62 release containing just bug fixes. The Wishlist page lists the bugs that will be fixed by the 0.62 release. The Download page now contains pre-release snapshots of 0.62, which contain those bug fixes and should be otherwise stable. (The usual development snapshots, containing other development since 0.61, are also still available.)
2011-07-12 PuTTY 0.61 is released
PuTTY 0.61 is out, after over four years (sorry!), with new features, bug fixes, and compatibility updates for Windows 7 and various SSH server software.
2010-05-17 Google listing confusion
Several users have pointed out to us recently that the top Google hit for "putty" is now not the official PuTTY site but a mirror that used to be listed on our Mirrors page.
The official PuTTY web page is still where it has always been: