
Friday, December 30, 2011

What is software architecture

Practical definition


Many people have different interpretations of what software architecture actually means. Some take it as the high-level decomposition of the system (subsystems or physical distribution). However, a much better interpretation is simply "the most important design decisions". What makes a design decision important? If it is going to impact what you (or your customer) care about the most, then it is important. Typically, if a design decision impacts availability, performance, scalability, security, usability, manageability, maintainability or cost, then it is part of the architecture.

Examples


Below are some examples of architectural decisions:

  • Availability: how to build in redundancy in all the layers (storage, middle tier) and how to fail over (e.g., client-initiated vs virtual IP)? How to handle excessive load (e.g., QoS)? Whether to select well-proven products with strong support?

  • Scalability: how to scale (scale out or scale up) and load balance?

  • Performance: multi-threading, algorithm choice, batch processing?

  • Security: encrypting data against exposure, using HMAC to check data integrity, secure communication, auditing?

  • Manageability: monitoring and operational control (e.g., logging, JMX)? Live upgrading (OK to upgrade just one node at a time in a cluster)?

  • Maintainability: automated tests, using a language, tools and frameworks familiar to the team, proper layering of the code (UI, service, data access layers)?

  • Cost: language, tools and frameworks familiar to the team, open source vs commercial products?


Specifying and maintaining the architecture


I find the best way to specify the architecture is to explain it in use cases (e.g., how fail-over is performed when a certain component fails, or how security is ensured in a typical use case).
Also, specifying the architecture is just the first step. Some team members may make sub-optimal architectural decisions in programming without knowing it, or you may realize that your initial decisions can be further improved when you see the code. Therefore, as the architect, your work is far from over: you must keep the architecture (as implemented in the code) fit as the project goes on.

Friday, July 15, 2011

A simple but highly useful feature request for DNS

Most people believe that having two Windows domain controllers provides transparent fail-over, i.e., if one DC fails, the clients will automatically use the other. However, this is not true. The client will simply use the first DC returned by the DNS. Similarly, if you use DNS to load-balance between multiple web servers, when one of them fails, some clients will still be directed to it.
To fix the problem, there is a very simple solution: enhance the DNS server to perform a health check against the host in the resource record. For example, the administrator could specify a TCP port to connect to, as in the imaginary syntax below:
  www        A      1.1.1.1    80     ; return this record only if we can connect to its TCP port 80
  www        A      1.1.1.2    80
  www        A      1.1.1.3    80

Of course, the health check could be more general, in which case you could use a script:
  www        A      1.1.1.1    web-check.sh  ; return this record only if the script returns true

where the IP would be passed to that script as an argument for checking.
It works for domain controllers too:
  _ldap._tcp.dc._msdcs.foo.com.   SRV  1.1.1.1  dc-check.sh
  _ldap._tcp.dc._msdcs.foo.com.   SRV  1.1.1.2  dc-check.sh

Finally, one might ask: why implement this checking in the DNS server instead of in the clients? The idea is that problems should be detected as early as possible to avoid bad effects downstream. In concrete terms, if a server is down but the DNS server (the broker) still refers clients to it, many clients will each need to perform this health check themselves. If the DNS server performs the health check, it is done only once, saving a lot of trouble downstream.
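To make the idea concrete, below is a minimal sketch of the TCP health check itself. The record set, port and timeout are invented for illustration, and of course a real DNS server would run this check while assembling its response rather than as a standalone program:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class DnsHealthCheck {

    // Returns true if we can complete a TCP handshake with ip:port.
    static boolean isAlive(String ip, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(ip, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String[] candidates = { "1.1.1.1", "1.1.1.2", "1.1.1.3" };
        for (String ip : candidates) {
            // Return the A record only if the health check passes.
            if (isAlive(ip, 80, 2000)) {
                System.out.println("www  A  " + ip);
            }
        }
    }
}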

Sunday, May 30, 2010

Architectural improvements for JVM to be enterprise ready

Time proven architecture for long running services


I've observed that once in a while our long-running Tomcat instance gets slower and slower until it (the JVM) is restarted. The Tomcat developers are obviously experts in enterprise Java, so why does this still happen, and how can we fix it? One could look for specific problems in the code, but a much better approach is to adopt a time-proven architecture, one well known in traditional Unix daemon processing:

  1. The master daemon process is started.

  2. The master daemon process spawns some child processes to handle client requests.

  3. After handling a limited number of requests, a child process will terminate itself (or be terminated by the master). The key point here is that the OS will free any resources (e.g., memory, file handles, sockets) allocated to that child process; there is no way for it to leave anything behind after its death.


This architecture ensures a long-running service without any degradation, even if the code is poorly written and has resource leaks.
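Java has no cheap fork (see the next section), but the pattern itself can be approximated by a master that launches whole worker JVMs and replaces each one after it exits. A rough sketch, assuming a hypothetical Worker main class that serves a fixed quota of requests and then exits:

public class Master {
    public static void main(String[] args) throws Exception {
        while (true) {
            // Worker, its --max-requests flag and app.jar are invented
            // for illustration; a real master would also hand the worker
            // the listening socket or a port number.
            Process worker = new ProcessBuilder(
                    "java", "-cp", "app.jar", "Worker", "--max-requests=1000")
                    .inheritIO()
                    .start();
            // When the worker exits (after its quota, or after a crash),
            // the OS reclaims all of its memory, file handles and sockets;
            // the master simply spawns a fresh, leak-free replacement.
            worker.waitFor();
        }
    }
}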

How can Java support this architecture?


So, does Java support this currently? The answer is no. To support it:

  1. The JVM needs to support the concept of an isolated process.

  2. Objects created by the JVM itself and by different processes must NOT be allowed to refer to one another; otherwise there would be no way to cleanly kill a process.

  3. The JVM must allow efficient forking so that a child process can be created quickly. This way, the master process can perform all kinds of lengthy initialization (e.g., Hibernate and Spring initialization), while it is still very quick to spawn a child in the initialized state.

Friday, May 21, 2010

Open letter to certification providers

Dear certification providers,
In recent months I've interviewed three candidates with CCNP certifications and was very disappointed to find that two of them didn't know how a switch differs from a hub or how it learns MAC addresses. My first reaction was disappointment with the candidates' lack of integrity and with the general availability of brain dumps or even real test questions. From a more positive angle, however, this has prompted me to think about what we can do to help you improve the situation.
If we consider certification as a service from the perspective of quality management, then we can clearly see there is a huge problem with it: the exam candidate is both a client and a user (and also a piece of client-provided material :-) ), but there are many other users of this service out there: prospective employers, job interviewers, peers and so on. A key requirement in quality management is to measure whether the users are satisfied with the service output. Obviously, as a prospective employer, I am very unhappy with the service output, because the CCNP certificate does not reflect the real expertise of its holders. However, none of you provide ways for unhappy users like me to give feedback on your service output. Without such a feedback mechanism, I really don't see how you can ensure the quality of your service.
Therefore, I'd request that you establish a mechanism to accept feedback. For example, provide a website to let me report certificate holders who obviously know little about the subject matter. Then follow up with an investigation, re-certify as required and revoke certificates where warranted. Just like a digital certificate whose private key has been compromised, such revoked certifications should be published on a website, like a CRL.
This is about handling individual incidents. If there are a significant number of such incidents, you should escalate the issue to a problem (in ITIL terms). That means you must identify the root cause and plug the hole to prevent similar incidents: introducing performance-based exams, reference-based certifications and whatever else it takes to fix the problem and save your reputation.

Sunday, February 7, 2010

TDD adapted for mere mortals


I've been teaching and practicing agile for several years, and there is definitely a problem with TDD: people find it very difficult to use. I believe there are certain points, either in TDD itself or in people's interpretation of it, that should be adapted (at least for mere mortals):

Writing test before code

It is definitely a very good practice to interweave coding and testing. This is what we programmers want to do; we feel the urge to test-run a certain piece of code when it feels complicated. However, writing the test before the code is not the natural way in many cases. For example, let's consider a BookService class. You'd like to implement its borrow(String borrowerId, String callNo) method. If you insist on writing the test first, you'll have to think very hard about what collaborators the BookService object will use. That guess is not only difficult to make, but most likely incorrect. A much more effective way is to write the borrow() method first; then you can see what collaborators it needs (e.g., a BookDAO, a BorrowerDAO, a system clock and so on).
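For instance, a first cut of borrow() might look like the sketch below. The collaborator types here are hypothetical stand-ins invented for illustration; the point is that each dependency only becomes apparent as the method takes shape:

import java.util.Date;

// Hypothetical collaborators, discovered while writing borrow()
// rather than guessed up front.
interface BookDAO { Book get(String callNo); }
interface BorrowerDAO { Borrower get(String borrowerId); }

class Borrower { }

class Book {
    void loanTo(Borrower borrower, Date when) { /* record the loan */ }
}

public class BookService {
    private BookDAO bookDAO;         // first collaborator to emerge
    private BorrowerDAO borrowerDAO; // second collaborator to emerge

    public void borrow(String borrowerId, String callNo) {
        Book book = bookDAO.get(callNo);
        Borrower borrower = borrowerDAO.get(borrowerId);
        // A third dependency emerges: the current time (a clock).
        book.loanTo(borrower, new Date());
    }
}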

Most TDD demos don't have this problem because they work on classes that need no collaborators, for example, stacks, calculators.

Note that I am not advocating writing the complete code before writing the test; we should build the functionality in suitable steps. For mere mortals: implement the basic functionality first, then test it, then write more code, then more tests.

My suggestion is to replace "writing test before code" with "interweaving coding and testing".

Take the smallest step that makes the test pass

I agree that we shouldn't write too much code without test-running it; if we do, it's difficult to isolate a bug. But why always take the smallest step if we are pretty sure the code is going to work? The size of the step should depend on the complexity of the code. We shouldn't take too large a step (hard to isolate a bug), nor too small a step (a waste of time).

The whole idea is a well-established principle in testing: risk-based testing. That is, we should put more effort into testing high-risk code, and less into low-risk code. Programmer effort is the scarcest resource in a software development project, so we should prioritize its use wisely.

My suggestion is to replace "take the smallest step possible" with "take the smallest step before you're worried about the correctness of the code".

If you aren't doing TDD, you aren't professional

This is not part of TDD itself, but many people believe it. I think it goes against the agile manifesto, which says we should value individuals over processes. Forcing TDD down people's throats is exactly the opposite. If people have tried it and it doesn't help them, they will simply not use it. It's that simple. People should have every right to use whatever works best for them.

In fact, most programmers like testing: they feel the urge to test-run the code when it gets complicated. It's just that writing the test before the code is so difficult and against their nature. Therefore, our process should work with their nature, not against it.

My suggestion is to replace "every professional programmer should do TDD" with "every professional programmer should keep looking for their own best practices".

TDD helps you design the API of your code

This doesn't make much sense to me at all. The user requirements guide you in implementing your UI classes. When implementing the UI classes, you are guided to design the API of your service classes. When implementing your service classes, you are guided to design the API of your DAO classes.

The real design aspect of TDD is not about the design of the API, but about making sure your code is loosely coupled and thus easy to test.

My suggestion is to replace "TDD helps you design the API" with "testing helps make your code loosely-coupled".


Sunday, January 31, 2010

Making your manual mocks resistant to interface changes


Suppose that you'd like to unit test the code below:

public class BookService {
    private Books books;
    private Patrons patrons;

    // Setters for injecting the DAOs (used by the tests below)
    public void setBooks(Books books) { this.books = books; }
    public void setPatrons(Patrons patrons) { this.patrons = patrons; }

    public void borrow(String patronCode, String callNo) {
        Book b = books.get(callNo);
        Patron p = patrons.get(patronCode);
        if (b.isOnLoan()) {
            throw new RuntimeException("already on loan");
        }
        b.addBorrowRecords(new BorrowRecord(p, new Date()));
    }
}

public interface Books {
    Book get(String callNo);
}

public interface Patrons {
    Patron get(String patronCode);
}

You'll need to mock the two DAO objects: Books and Patrons. Usually I hand-create these mock objects instead of using mock frameworks, in order to access the fields of the test case in a mocked method (for the other reasons, see Uncle Bob's article). So, the test may look like:

public class BookServiceTest extends TestCase {
    private Book b123 = new Book("123", "Java programming");
    private Patron kent = new Patron("001", "Kent");

    public void testBorrow() throws Exception {
        BookService bs = new BookService();
        bs.setBooks(new Books() {
            @Override
            public Book get(String callNo) {
                return callNo.equals("123") ? b123 : null;
            }
        });
        bs.setPatrons(new Patrons() {
            @Override
            public Patron get(String patronCode) {
                return patronCode.equals("001") ? kent : null;
            }
        });
        bs.borrow("001", "123");
        List<BorrowRecord> records = b123.getBorrowRecords();
        assertEquals(records.size(), 1);
        BorrowRecord record = records.get(0);
        assertEquals(record.getPatron().getName(), "Kent");
    }
}

The problem is that if you later add a method to the DAO interfaces such as:

public interface Books {
    Book get(String callNo);
    void add(Book b);
}

Your unit test code will break because you aren't implementing the add() method. To avoid this problem, the idea is to first create an (abstract) mock class implementing only the needed methods, such as get(). Then use an automatic way to create a subclass that provides all the remaining dummy methods. For the latter, one can use cglib. Here is an example:

public class BookServiceTest extends TestCase {
    private Book b123 = new Book("123", "Java programming");
    private Patron kent = new Patron("001", "Kent");

     public abstract class MockedBooks implements Books {
        @Override
        public Book get(String callNo) {
            return callNo.equals("123") ? b123 : null;
        }
    }
    public void testBorrow() throws Exception {
        BookService bs = new BookService();
        bs.setBooks(mock(MockedBooks.class));
        bs.setPatrons(new Patrons() {
            @Override
            public Patron get(String patronCode) {
                return patronCode.equals("001") ? kent : null;
            }
        });
        bs.borrow("001", "123");
        List<BorrowRecord> records = b123.getBorrowRecords();
        assertEquals(records.size(), 1);
        BorrowRecord record = records.get(0);
        assertEquals(record.getPatron().getName(), "Kent");
    }
    @SuppressWarnings("unchecked")
    private <T> T mock(Class<T> c) {
        Enhancer enhancer = new Enhancer();
        enhancer.setSuperclass(c);
        enhancer.setCallback(NoOp.INSTANCE);
        //Because MockedBooks is a non-static inner class, need to provide the outer instance
        return (T) enhancer.create(new Class[] { getClass() },
                new Object[] { this });
    }
}

To make the code reusable in multiple test cases, just extract it into a base class:

public class ChangeResistantMockTest extends TestCase {
    @SuppressWarnings("unchecked")
    public <T> T mock(Class<T> c) {
        Enhancer enhancer = new Enhancer();
        enhancer.setSuperclass(c);
        enhancer.setCallback(NoOp.INSTANCE);
        return (T) enhancer.create(new Class[] { getClass() },
                new Object[] { this });
    }
}

public class BookServiceTest extends ChangeResistantMockTest {
    private Book b123 = new Book("123", "Java programming");
    private Patron kent = new Patron("001", "Kent");

     public abstract class MockedBooks implements Books {
        @Override
        public Book get(String callNo) {
            return callNo.equals("123") ? b123 : null;
        }
    }
    public void testBorrow() throws Exception {
        BookService bs = new BookService();
        bs.setBooks(mock(MockedBooks.class));
        ...
    }
}

Acknowledgement: I got this idea from the Scala mailing list.

Friday, January 1, 2010

Installing MIT Scratch on Kubuntu 9.10


Below are the steps to get Scratch working, including playing audio (recording quality is still quite poor, but you can always record outside of Scratch).

Install PulseAudio

$ aptitude install pulseaudio pulseaudio-utils

Install the latest Squeak

The Squeak package included in Ubuntu 9.10 can't play audio (wav). Fortunately, the latest version (3.11.3.2135) works. So, go to http://www.squeakvm.org/unix to download the binary package. However, if you try to download the .deb package, it will say that you don't have permission. So, download the RPM package instead and then convert it:

$ alien Squeak-3.11.3.2135-linux_i386.rpm
$ sudo dpkg -i squeak_3.11.3.2135-2_i386.deb

Install Scratch 1.4

Download the Scratch source from http://info.scratch.mit.edu/Source_Code. Unzip it and you will get the Squeak image file ScratchSourceCode1.4.image. Before you can run it, you need to know that it relies on some "plugins" written in C for each platform. So, download the plugin source code from the same page and compile it. The package contains three plugins; take the one called ScratchPlugin as an example:

$ cd ScratchPluginSrc1.4
$ cd ScratchPlugin/ScratchPlugin-linux/
$ ./build.sh
$ sudo cp ScratchPlugin /usr/lib/squeak/3.11.3-2135/so.ScratchPlugin

For the UnicodePlugin, you need some extra steps. First install the dependencies:

$ sudo aptitude install libpangomm-1.4-dev
$ sudo aptitude install libcairo2-dev

Then add the -fno-stack-protector option to the gcc command in the unixBuild.sh file:

       ....
       gcc -fno-stack-protector -fPIC -Wall -c `pkg-config --cflags pangocairo` *.c
       ....

Finally build and install it:

$ ./unixBuild.sh
$ sudo cp UnicodePlugin /usr/lib/squeak/3.11.3-2135/so.UnicodePlugin

Running it

To run Scratch, type:

$ squeak ScratchSourceCode1.4.image

It should work.

Sunday, December 6, 2009

Open source web-based centralized console for tripwire

When using tripwire to monitor changes on multiple servers, it is common to have to review and accept changes on a daily basis. Logging into multiple servers to accept the changes is troublesome. So, I've created a web-based centralized console to review and accept changes. It is GPL licensed and can be downloaded from http://centralwire.sourceforge.net. Hope it is useful to others.

Friday, December 4, 2009

Handling BLOB/CLOB in PostgreSQL with JDBC/Hibernate


What is a BLOB/CLOB? It is a large object (binary for BLOB, character for CLOB), commonly used to store large files in the database as column values.

It was really a challenge to work with BLOBs/CLOBs, and in the process I almost pulled my hair out. Below are the hard lessons learned:

  • Avoid using LOBs if possible! If, say, the data is just a few KB in size, just treat it as a string or byte array.
  • You must NOT try to read the stream outside the transaction. Note that using Open Session in View is NOT enough: once the transaction has ended, even if the Hibernate session is still open, you still can't read it (you'll get an "invalid large object descriptor" exception). See the sketch after this list.
  • You must NOT try to read a LOB twice within the same transaction. The "pointer" can't seem to be reset, so once you've read past some portion, you can't read it again.
  • I still haven't figured this one out: if you repeatedly read a row containing a LOB in different transactions, you may get the same "invalid large object descriptor" error when you try to read it in a new Hibernate session.
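To make the first two lessons concrete, here is a minimal plain-JDBC sketch (the table, column and connection details are invented for illustration) that follows the rule: consume the LOB stream fully while the transaction is still open, and only commit afterwards:

import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BlobRead {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "pass")) {
            conn.setAutoCommit(false); // LOB access needs an open transaction
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT content FROM documents WHERE id = ?")) {
                ps.setLong(1, 42L);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        try (InputStream in = rs.getBinaryStream("content")) {
                            byte[] buf = new byte[8192];
                            int n;
                            while ((n = in.read(buf)) != -1) {
                                // Process each chunk here. Once the
                                // transaction ends, the large object
                                // descriptor becomes invalid.
                            }
                        }
                    }
                }
            }
            conn.commit(); // commit only after the stream is consumed
        }
    }
}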

Sunday, October 4, 2009

Javablackbelt a rip-off?

I always thought that javablackbelt was a non-profit community, but it is not. It is actually selling the questions people submit, for profit! I think this is ethically not right:

  1. It always advertises itself as a community, but it is actually a for-profit company.
  2. It calls its license copyleft, but it is not GNU copyleft: it explicitly grants itself the right to sell the questions.
  3. It should highlight this term. Currently there is no obvious license displayed when people try to submit a question.
  4. This is unlike RedHat making money from open source software, because that software is licensed by its authors to be freely available to anyone, so RedHat has to provide its own added value. If javablackbelt is going to sell the questions, it should share the profits with the authors.

Friday, January 16, 2009

PMP exam

Sat and passed the PMP exam in HK. It is very similar to the free example questions on www.oliverlehmann.com. All I used for studying was the book by Joseph Philips and the free questions mentioned above. The book covers almost everything in the exam, though it would be easier to read if it included more concrete examples.

Sunday, December 21, 2008

The national Information System Integration Project Manager certification

Today I sat the national Information System Integration Project Manager certification exam. It consists of three parts:

  1. 75 multiple-choice questions. The questions mainly revolve around the PMP PMBOK. There are some on IT technical concepts such as those found in MCSE, a couple on software development processes/notations such as the SDLC (structured analysis and design), RUP and UML, some on IT project supervision, some on laws and some on English. If you have studied for the PMP and have a strong IT background, this part is not that difficult.
  2. Three case studies with three questions each. The case studies were quite interesting and realistic. If you understand the PMP framework and have managed IT projects, this part shouldn't be that difficult either.
  3. Essay writing on one of two topics in IT project management. This part, in my view, is there to let you show and share your IT project management experience. If you have managed IT projects, you should be able to think about, organize and express such experience within the PMP framework.
Even though I don't know yet if I passed, I think this exam is a pretty good one. Unlike the PMP exam, it goes beyond multiple-choice questions (knowledge/concepts) to test whether the candidate has real-world experience in IT project management. And unlike the CISSP exam, which takes six hours without a break, it has breaks between the parts so the candidate can get a good rest.

CISSP exam

Sat and passed the CISSP exam in Oct 2008. It was easier than I expected. I mainly used the open CISSP study guide as my study material. I did spend a lot of time reading the CISSP preparation guide by Ronald Krutz, but I felt that it did NOT help at all. All you need is the open CISSP study guide plus some (or quite some) knowledge of networking (as found in MCSE).

Overall I don't think the CISSP qualifies someone as a security expert; it only proves that the holder has taken the initiative to learn a little about everything in security.


Saturday, July 12, 2008

How to install XP SP2 on HP mininote

Installing XP on an HP mini-note is a very challenging process if you don't know how. Below are the key steps:

  1. Shrink the preloaded Vista partition. For example, boot with the System Rescue CD, type startx to enter X11 and run gparted to do it graphically. Easy.
  2. If you go ahead and install using the XP CD-ROM, setup will fail with a BSOD (blue screen of death) with error code 0x0000007b after loading its built-in drivers. This is because XP SP2 needs a VIA chipset driver in order to properly access the SATA disk in protected mode. To fix this, boot into Vista, then download and unpack the VIA chipset driver from HP. There is a sub-folder named "driver" and that's what you need. Copy the XP files from the CD, use nLite to merge the drivers into the files (tell it to load all the drivers found in that "driver" folder and then exclude inappropriate ones such as the 64-bit ones), make a new ISO image and burn it onto another CD. To burn an ISO image, follow these steps.
  3. Use that CD to boot and install XP.
  4. Now, you can only boot into XP but not Vista because XP has taken over the MBR. To fix the problem, follow these instructions to restore the Vista boot manager.

Sunday, June 29, 2008

How I got over the hurdles to migrate from XP to Kubuntu 8.04

Why install Kubuntu?

Short answer: I planned to plug my XP hard disk into a new computer, but it didn't work. At the same time, I had always been toying with the idea of switching to Linux as my major home desktop one day, as I mainly do Java development and writing (OpenOffice) at home. Why KDE instead of Gnome? I've always liked KDE.

Long answer: my original computer's hardware failed to boot. As I have an existing retail copy of XP, I ordered a Dell computer without an OS (N-series; BTW, the N-series is not available on their Hong Kong website, so you have to contact a salesperson via instant messaging to get a custom quotation. The salesperson was very efficient and helpful. Do not try to contact pre-sales by email; I got no reply at all). On arrival I found that it has only SATA interfaces, so I couldn't put my existing IDE hard disk with XP nor my existing CD-RW drive into it, and thus it couldn't boot. The only way to install an OS on it immediately was PXE. I don't know if XP can be installed like that, but I had done it with Ubuntu at work.

Network install

So I set up TFTP and DHCP servers on my brother's XP computer to start a network install of Ubuntu according to this article. It was easy, and setup took less than 10 minutes. I had to enable PXE in the BIOS (by default it was disabled).

Do NOT use Kubuntu 64bit nor KDE4

At first I chose to install Kubuntu 64-bit and KDE4, but later reinstalled with Kubuntu 32-bit and KDE 3.5.

64-bit is not going to speed up anything, as my computer has 2 GB of RAM (64-bit is good only if you have more than 4 GB and programs using 64-bit pointers; otherwise you're just wasting memory by storing 32-bit pointers in 64-bit cells). In addition, 64-bit versions of the software just aren't as stable as the 32-bit ones. For example, 64-bit OpenOffice can't find the JRE.

KDE4 is unstable as hell. Crashes here and there in all KDE applications (adept, network manager, user manager, etc.). Help indexing doesn't work. Network manager failed to connect and crashed. Some software requires user interaction during install, such as agreeing to the JDK license, but adept will not show you that, so you are left wondering why the progress is stuck at a certain point. Even if you tell it to show you the details, somehow the text interface doesn't work. When editing user accounts in the user manager, it sets the uid of users to 0 (root)!

Don't think that KDE 3.5 is old. It is very modern, powerful, beautiful and rock solid; I never had a single crash. For me it can do everything that Windows can do in terms of a desktop environment.

Making the keypad work like Windows

I am used to the arrow keys on the keypad. In Kubuntu they also work, but if the Shift key is pressed, they input digits. To make them act as shift-arrow, I had to enable an option in the "regional & language" GUI (NOT the keyboard GUI!) that says "Shift with numeric keys works as in MS Windows" on the "Xkb Options" tab.

Ugly font in Firefox

You'll see ugly fonts on many web pages in Firefox. This is because of the use of the Helvetica font: Ubuntu maps it to the Nimbus font using fontconfig, and the Nimbus font looks good in print but ugly on screen. The easy solution is to turn on auto-hinting and sub-pixel anti-aliasing for the fonts (I don't know exactly what they mean, though). To set this per user, it can be done in the GUI. To set it system-wide, I manipulated the symlinks in /etc/fonts/conf.d: I added 10-antialias.conf, 10-hinting-full.conf and 10-sub-pixel-rgb.conf from the /etc/fonts/conf.avail folder.

MS core fonts

I have OpenOffice documents using the Arial and Courier fonts. They look completely different in Kubuntu due to the lack of these fonts. The best way to fix this is to install the MS core fonts. Just google and you'll find the details.

Making English the default locale for me

To allow my family members to use the computer, I chose Chinese as the system-wide default language (LANG and LANGUAGE). For myself, I want English. This can be done easily using the KDE GUI (regional & language). However, it doesn't modify LANG (I don't know where it stores the change) and only works for KDE applications. Gnome applications such as Firefox and OpenOffice look at LANG and still display their UI in Chinese. To fix the problem, I had to modify .xsessionrc:

export LANG=en_US.UTF-8
export LANGUAGE=en

Inputting Chinese in the English locale

Even though I mainly use English in my work, I still need to input Chinese from time to time. By default the en locale uses xim, which doesn't work with Chinese, so I had to configure it to use scim. This is managed by the "alternatives" mechanism in Ubuntu:

update-alternatives --config xinput-all_ALL

Using the Chinese version of KDE for family members

My family members need the Chinese version of KDE. This can be done by choosing Chinese in the KDE GUI. However, Chinese is unavailable for selection until you install some packages:

aptitude install kde-i18n-zhtw

Installing the Flash player

You need to install the flashplugin-nonfree package.

Enabling CPU frequency scaling

To save energy, I basically followed this article: I enabled CPU frequency scaling in the BIOS and added acpi_cpufreq to /etc/modules. I didn't need to put any of the governors into the modules file, though; by default the governor is "ondemand", which should be good for most people.

Directory tree in the file manager

I really wanted to get back the Windows Explorer style file manager, with the directory tree on the left-hand side. Konqueror has this function: a view profile named "File Management". To launch it with this profile, create a shortcut with the command line:

kfmclient openProfile filemanagement

MIT Scratch

I've been using MIT Scratch to teach my niece programming. Even though it is written in Squeak and Squeak is available for Ubuntu, it won't run due to the missing ALSA sound plugin. The easiest way to fix this is to run it in wine.

Games

My dad was running two Chinese chess games in XP. Only one of them runs in wine (in the process I needed to download DLLs such as the VB runtime and the MSVC runtime; just google for them). I can't find any good Chinese chess games that run on Linux (including Java ones).

My mom wants to play MahJong. I found a flash MahJong game that runs fine in Firefox.

Web authoring

I was using Nvu in XP. In Kubuntu, I installed Quanta Plus. It works quite well.

Online backup

I was using Mozy, which has worked very well. Because it has no Linux client, I am going to switch to SpiderOak. I don't know yet whether it works well.

Ctrl-F11 in Eclipse doesn't work

I am used to pressing Ctrl-F11 in Eclipse to re-run the last program. In Kubuntu it doesn't work, because Ctrl-F11 is the shortcut for switching to the 11th virtual desktop. To fix it, go into the keyboard GUI, choose "shortcut sequences" and set that shortcut to none.

Conclusion

Overall I find that Kubuntu needs a few critical tweaks before it is usable by end users, particularly in the CJK market. Such tweaks are not very well documented and take some serious research effort, even for an experienced Linux/Ubuntu server administrator. However, once it is tweaked, it is fast, rock solid, powerful and cool. Some applications may be unavailable (e.g., games, Mozy) or limited in capabilities.

Friday, March 21, 2008

Why people hate Tapestry (or even Howard)

Recently I've seen quite a few people expressing their hatred towards Tapestry or Howard (e.g., on TheServerSide, on the Tapestry mailing list, in blogs). Admittedly I made a mistake by recommending Tapestry to quite a few people and organizations in Macau, and now they're stuck with T4. Objectively speaking, however, it is not the fault of Howard or Tapestry. Every one of us should take responsibility for carefully evaluating any given technology before adoption, including its track record of compatibility and so on.

After all, all Howard did was release his code for others to use for free under the Apache license. Presumably this is goodwill. In addition, the Apache license clearly says there is no warranty of any kind. It's up to us to decide whether to use it or not. Sure, Howard might have made promises that he was unable to meet ("This should finally crack the backwards compatibility nut, allowing you to have great assurance that you can upgrade to future releases of Tapestry without breaking your existing applications."). But everyone is entitled to publish his own objectives, and there is simply no guarantee that they will be realized. I guess some people are angry because they think they were tricked. However, I still think it is the responsibility of the technology evaluator to look for facts, not promises. There is no way to tell whether someone is lying or just engaging in wishful thinking. Ultimately, we are free to adopt something or not, and it is our sole responsibility, be it commercial or open source.

Why is this an important issue? We, technology evaluators, must recognize our own mistakes in order to avoid repeating them in the future.