Friday, February 23, 2007

Subversion not only for source code

I have lots of important documents, on different computers, in different locations, for different purposes.
I wanted to keep all this data as safe as I could, so I set up a repository on our Subversion server for this purpose.
With TortoiseSVN I have a very nice user interface that lets me easily keep all these documents both safe
(our server has an automatic backup system) and versioned (wow! Once you get used to it, you can't do without it!).
But Subversion can be very useful even on a single machine: install Subversion and TortoiseSVN, and create a repository (note that svnadmin takes a local path; TortoiseSVN will then reach it at the URL file:///c:/svn/Documents):
svnadmin create c:\svn\Documents
Now you can add to the repository all the documents you want to keep safe and versioned (optionally organized into directories):
using Explorer, open your Documents folder, right-click on the document you want to add, go to the TortoiseSVN menu,
and select “Add” (or “Aggiungi”, if you have the Italian language pack ;-) ).
Well, that's all! Whenever you modify the document, its icon will change, and you'll be able to commit it to the repository,
and do the same things you usually do in your versioning system when you work with source code.
To make a backup copy of the repository, you can use svnadmin with the dump or hotcopy commands
(I'm using both of them on our server: hotcopy to keep a mirror, and dump for a full backup on DVD).
It’s really easy and fast to set up, even easier to use, useful… and open source! Try it!

Get Subversion
Get TortoiseSVN

Here is also a small Backup.bat utility I use to back up my repos:

@echo off
rem Dump every repository under c:\svn into c:\Backup_svn
cd c:\svn
for /D %%i in (*) do svnadmin dump c:\svn\%%i > c:\Backup_svn\%%i_dump
cd C:\Backup_svn
rem Keep one previous generation: delete the old archives, then age the current ones
del *.old.7z
ren *.7z *.old.7z
rem Compress each dump with 7-Zip (here installed under the Italian Program Files),
rem then delete the uncompressed dump files
for %%f in (*_dump) do C:\Programmi\7-Zip\7z a %%f.7z %%f
for %%f in (*_dump) do del %%f
cd ..


For this script you'll need 7-Zip as well.

Saturday, February 17, 2007

.NET and svn

I've finally started my new job! I'm joining the development of several projects in C#, with 2 to 7 developers on each project. Until now they used Visual SourceSafe to handle all the projects, but now we need to make centralized source control available over the Internet. Also, from some locations we can only access the web via HTTP or HTTPS (there are firewalls we can't configure specifically). The solution was to use Subversion - thanks Simon, even if you'll hate me for using it with Microsoft ;) - in combination with Apache, to set up a centralized service for source revision control and backup. All you need is:
and optionally:
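As a rough sketch of what the Apache side of such a setup can look like (module names as in a typical Apache 2.x Subversion install; the paths, realm, and user file below are placeholders, not our actual configuration):

```apache
# Load the WebDAV and Subversion modules
LoadModule dav_module        modules/mod_dav.so
LoadModule dav_svn_module    modules/mod_dav_svn.so
LoadModule authz_svn_module  modules/mod_authz_svn.so

<Location /svn>
    DAV svn
    # Serve every repository found under this parent directory
    SVNParentPath c:/svn
    # Basic authentication against an htpasswd file
    AuthType Basic
    AuthName "Subversion repositories"
    AuthUserFile c:/svn/htpasswd
    Require valid-user
</Location>
```

With this, a repository at c:/svn/MyProject is reachable as http://server/svn/MyProject, which is exactly what gets through those HTTP-only firewalls.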
Yesterday I set up a beta environment; let's do some testing.
Stay tuned!

JUG-TO Meeting - February 2007

Ok, new month, new JUG meeting.
No international guests this time (phew!), but Bruno and Domenico will talk about UML for dummies and the Decorator pattern, respectively.
Obviously, only a voyeur would miss the meeting...

Wednesday, January 24, 2007

Web 2.0 vs Client/Server

Web 2.0 is the trend! Almost any application developed in the last two or three years is web based; I've even seen intranet applications developed with a web based interface, even when this meant a drastic reduction in performance and an even more drastic increase in development time. Web based applications are, from the administrative point of view, maintenance free, as they don't need to be deployed to the clients: I think this is their most relevant feature, and the only reason why an intranet application should be developed as a web application.

But there is some good news from the traditional client/server side too: technologies like Java Web Start can drastically decrease the administration effort of distributing and maintaining applications. Also, Java code can run on different architectures, just like web pages can be viewed by browsers running on almost any machine; still, I believe that client/server systems are more efficient, easier to develop, and more capable on the user interface side.
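To illustrate how Java Web Start achieves that: the client launches the application from a small XML descriptor hosted on the server, and updated jars are fetched automatically at the next launch, so there is nothing to deploy on the clients. A minimal descriptor might look like this (the codebase, jar name, and main class are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jnlp spec="1.0+"
      codebase="http://intranet.example.com/app"
      href="app.jnlp">
  <information>
    <title>Sample intranet client</title>
    <vendor>Example</vendor>
  </information>
  <resources>
    <!-- Required Java runtime and the application jar to download -->
    <j2se version="1.5+"/>
    <jar href="app.jar"/>
  </resources>
  <application-desc main-class="com.example.Main"/>
</jnlp>
```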

That's why, every time I need to develop a new system, I carefully analyze the pros and cons of the two models.

Saturday, January 20, 2007

Open Terracotta

Open source software and the low cost of hardware with advanced features - such as motherboards with integrated RAID controllers, multi-core processors, and faster, larger RAM modules (features that a few years ago were offered only on high-end systems) - now offer the chance to get impressive fault tolerance and clustering features on non-dedicated systems. Lots of applications, like web services, could gain great benefit from these features, and even new approaches to ordinary office applications become possible: with the reliability and performance of a high-end system, it is possible to implement departmental web versions of word processors or spreadsheets at a cost low enough to compete with the licensing cost of traditional office suites. Up to now I had been thinking of clusterable software as a clustering engine (an API) used in a custom-designed application: something like an application with transactions or similar constructs used to keep consistency across the various nodes of the cluster. Up to now.
Yesterday, Jonas Bonér from Terracotta Inc. presented Open Terracotta at the Turin JUG meeting (January 2007). Please, have a look at it! terracotta.org.
One important thing Jonas said is that Open Terracotta proposes a model of clustering/replication for enterprise Web 2.0 applications based on two layers: the usual database, which is responsible for maintaining clustering/replication for the persistence layer - that is, the data you usually store in the DB - and Terracotta, which aims to maintain consistency across cluster nodes for the session data - that is, the data about the logged-in user.
I think this kind of approach is the most effective, and fits perfectly in a Web 2.0 application. But what about other contexts? First of all it's important to focus on the reasons leading to a clustering choice: is it a matter of scaling capability, fault tolerance, or both? It's very important to understand that, when fault tolerance becomes a concern, a “software only” solution may not be sufficient, exposing for example the network layer as a single point of failure; in this context, software becomes part of the system (and not the system itself), so the whole system must be designed as a fault tolerant composition of software AND hardware. That said, I still see in Terracotta a great and easy (for the developer) way to write software focused on the business domain instead of having to deal with the technical details of managing replication consistency.

Java was the first language I used with an integrated concurrency model (the synchronized keyword, wait() and notify(), the new concurrent package): I thought this was a great leap forward for programmers. Yesterday we discussed the “what if” scenario in which Terracotta's heap-consistency clustering approach is plugged into the open source JVM; this could lead to great new applications! Imagine how easy it could be to implement a chat system, or in general any kind of application (client based or web based) using shared information. But personally I think that Sun doesn't want to plug into the free VM features that it's willing to sell to its customers!
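To make the chat idea concrete: Terracotta's model clusters plain Java objects, so a shared chat room could be written as an ordinary POJO like the sketch below. The class and method names are hypothetical, and nothing here is Terracotta API; the idea is that, with an instance of this class declared as a clustered root in the Terracotta configuration, the synchronized methods would act as cluster-wide locks and every JVM would see the same message list.

```java
import java.util.ArrayList;
import java.util.List;

// A plain Java object: no clustering code at all. Under Terracotta's
// approach, an instance shared as a clustered root would be replicated
// across JVMs, and its monitors would become cluster-wide locks.
public class ChatRoom {
    private final List<String> messages = new ArrayList<String>();

    // Locally a plain monitor; cluster-wide under Terracotta.
    public synchronized void post(String user, String text) {
        messages.add(user + ": " + text);
    }

    // Return a defensive copy of the message history.
    public synchronized List<String> history() {
        return new ArrayList<String>(messages);
    }

    public static void main(String[] args) {
        ChatRoom room = new ChatRoom();
        room.post("alice", "hello");
        room.post("bob", "hi alice");
        System.out.println(room.history());
    }
}
```

The point is exactly the one made above: the developer writes ordinary synchronized Java, and the clustering layer (not the application code) takes care of replication consistency.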