Category: Software

  • How to increase HDD size for a VM

    A quick step-by-step guide on how to increase the HDD size for a VM

    The steps required are:
    1. First, increase the hard drive size for the VM in VMware.
    2. Execute the commands below to create and recognize the new partition
    # /sbin/fdisk /dev/sda
    p          # print the current partition table
    n          # create a new partition
    p          # make it a primary partition
    <enter>    # accept the default partition number
    <enter>    # accept the defaults for first and last sector
    p          # print again to verify the new partition
    w          # write the changes and exit
    # reboot   # reboot so the kernel re-reads the partition table

    3. Add the new partition to the LVM volume group to extend the disk space
    # /sbin/pvcreate /dev/sda3 # adjust sda3 to the partition created above
    # /sbin/vgdisplay | /usr/bin/head # note the VG Name we are going to extend
    # /sbin/vgextend vg_centos /dev/sda3 # use your VG name and new partition
    # /sbin/lvdisplay # note the LV Path of the root logical volume
    # /sbin/lvextend /dev/vg_centos/lv_root /dev/sda3 # use your LV path and new partition
    # /sbin/resize2fs /dev/vg_centos/lv_root # grow the filesystem to fill the LV
    # reboot

    4. You should now see increased disk space
    # df -h
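
    If the numbers look unchanged, each LVM layer can also be checked directly (a quick sketch; vg_centos and lv_root match the names used above):
    # /sbin/pvdisplay /dev/sda3 # physical volume size
    # /sbin/vgdisplay vg_centos # volume group size and free extents
    # /sbin/lvdisplay /dev/vg_centos/lv_root # logical volume size
    # df -h / # filesystem size as seen by the OS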

    Steps have been adapted from
    http://www.bluhm-de.com/increase-centos-6.2-hard-drive-space-/-partition

  • What would future in the ‘cloud’ look like


    I am always amazed at what computers have done so far. Many of us use, live with, and interact with many devices and connected peripherals every second of our lives. It has undoubtedly led towards a better, more social (perhaps), and much more engaging, if sophisticated, ‘digital’ life.

    Whenever I look back over history on how this trend has progressed, the outcomes do seem to follow one particular pattern – progress, fueled by improving our ability to do things we couldn’t accomplish so readily (or easily, or economically) in the past. Things such as online shopping and commerce, the ability to communicate, to share important moments, to bring resources from one end of the globe to the other in a fraction of a second. It is definitely a better place to be in today than it was before. I believe most of us would now find it hard to imagine how one could live without the Internet, or even a mobile phone.

    One of the important achievements towards making this possible has been big names such as Google, Amazon, IBM, Microsoft, and HP pushing the limits of what software and hardware together can do to make a more connected, more ‘alive’ digital world come true. One of the most important topics of this decade (I would say) has been the advent of Cloud Computing.

    Cloud Computing, in a nutshell, is the ability to use computing resources – such as CPU, memory, storage – over the Internet. While the debate over whether Cloud Computing is just another market buzzword is pointless in my opinion, it has brought a few important questions to my mind as I think about what kind of future this progress can promise.

    Take a look at a mind-map I made recently while researching one of the most popular cloud offerings – Amazon Web Services.

    Amazon Web Services 2014 Mind-Map

    The beauty of AWS (Amazon Web Services, for short) is how they’ve accomplished this feat with a clear mindset of making computing truly available as a utility. Of course, many in the corporate world would argue that such computing facilities already existed in the virtualization world, and that is very true. Corporations have often needed to spin out new machines and networks inexpensively, yet keep economies of scale when unused inventory has to be thrown away. With a physical computer it is hard to get just the right mix of specs; with a virtual one, things become somewhat more manageable.

    One of the key strengths of AWS, and perhaps the topic in question that I have in mind, is the ‘promise’ of utility computing: pay-as-you-go, ‘rent’ over ‘own’ computing.

    Historically, when computers were the size of a room and mere mortals couldn’t afford (or even consider) one, IBM owned and dominated an industry that made computing power available to interested corporations on a rental basis. The premise was for IBM to offer computers to big companies and universities, and let them use these under a pay-as-you-go sort of agreement. The idea was to bill customers mainly on CPU time, but this soon expanded into an industry where understanding and accounting for CPU utilization and storage used, and controlling unwanted cost, became a full-time job. That model had its run, and in some cases is still prevalent in corporate worlds that use rental printers, infrastructure as a service, or, some might say, even software as a service. The model did have some benefits, mainly in not having to worry about servicing and maintaining such complicated machinery, since all of that could be bundled by IBM as a ‘service’.

    So, in a nutshell, the benefits of IBM’s model were:

    1. You pay only for what you use, nothing else. For a small cost, we take care of any hardware maintenance, patching, and upgrades for you.
    2. No upfront costs, nor any costs down the line. We can offer, augment, or decrease your rented resources to balance your needs.
    3. Your environment, your assets. We give you the best computing resources to get things done cheaper, and faster.

    Of course, it may have been done differently for different sizes, different customers, different geographies, but I believe that was the promise expressed to a lot of corporate consumers, before they started switching to personal computers and workstations. Of course, everybody wanted a computer of their own, and many still do.

    Now, when I look at AWS, many points feel reminiscent of the time when powerful computing, networking, and ample storage were not cheaply available to most of us. Networked computing is hard, and managing servers and databases is not just hard, it is ‘expensive’ too once you own it. Given the vast power Amazon puts at anyone’s fingertips, many young entrepreneurs can surely do wonders. But if you consider for a moment what comes with becoming part of this ecosystem, you do have to ask yourself – if you wish to ‘pull out’ at some point, would it be possible to do so? The simple answer could be yes, but as with many things in life, an investment into something, even if it is a cloud, is not so easy (or practical) to undo.

    I predict that, as with many trends in the digital world, cloud computing will bring about a division, with big corporations driving market share towards the cloud, yet still keeping their heads occupied with new management issues resulting out of ‘wasted’ cloud usage. Similar to electricity savings during summers, there will be CoolBiz days where people are encouraged to optimize their ‘spend’ on utility computing. New jobs requiring cloud administration, monitoring, and usage accounting on the cloud will spring up. We’ll judge how advanced a country is by comparing its daily computing consumption charts with the rest of the world’s. Most probably, utility computing might even become a government-owned service offered to anyone who contracts and pays for its utilization.

    It’s not too often that I compare the scale of performance everyone sees from Amazon, Facebook, or Google with my own, and feel somewhat mentally assured that if you’re on the cloud, you can have all of that anytime. What many of us fail to acknowledge is that when you’re the size of Google, Amazon, or Facebook, you’ll have enough incentive to use a cloud, or, with the right team, run a mix of your own infrastructure together with public clouds. Whether we have that incentive today is definitely not an easy question. But a ‘lock-in’ into any kind of technology, be it PCs or the cloud, is always going to have implications on the future for the rest of us.

    As the digital world continues to become an increasingly complex world of its own, giants like Google and Amazon, who are best at what they do, will continue to fuel it. Whether one sees this as an opportunity, a trap, or an evolution in networked digital computing is best left to each person. But from an altogether different perspective, I believe one must see beyond the promise painted today into what it can become tomorrow.

    My advice – definitely exploit cloud computing, but in moderation. You’ll only want to live in a shared or rented apartment, no matter how good, for so long, until you can finally own one of yours.

    I think most of us eventually do own one.

  • cld2 – Google’s Compact Language Detector 2 – standalone command line on CentOS

    It appears that cld2 has no mention of how one would go about using it (or at least that is the way it looks to me). Its language detection ability is one of the better ones, and I decided to make use of it.

    I came across a blog mentioning how to install cld2 on Ubuntu, but it falls just short of using it directly from the command line; it only mentions how to build a Python binding.

    Luckily, I also came across another blog where a Slackware script builds a command line tool, which is exactly what I was looking for, except that I had CentOS, not Slackware.

    So with a little bit of digging around the various compile scripts in cld2’s SVN trunk, I got a faint sense of how to combine the ideas from these two blogs, and gave it a try. I succeeded! Here’s what I did

    1. Get g++; it is required to build cld2 on your CentOS machine
      $ /usr/bin/sudo /usr/bin/yum install gcc-c++
      ...
      $ which g++
      /usr/bin/g++
      
    2. Get the cld2 source through SVN on your local CentOS machine. In my case I used the /tmp folder
      $ pwd
      /tmp
      $ svn checkout http://cld2.googlecode.com/svn/trunk/ cld2
    3. Next, make a copy of one of the existing compile scripts, specifically compile_libs.sh, to make a few changes. This step is already mentioned in how to install cld2 on ubuntu. I am on 32-bit, hence I follow the same step of removing the -m64 flag.
      $ pwd
      /tmp/cld2/internal
      $ cat compile_libs.sh | sed 's/\ \-m64\ //g' 1> compile_libs_32bit.sh
      
    4. To make a standalone cld2 executable, I again followed the steps from the Slackware script example, and made the following changes to my copied compile script. Here’s a diff of the changes from compile_libs.sh to my custom compile_libs_32bit.sh script (a rough sketch of its essence appears after these steps)
      https://gist.github.com/visitsb/8affec514ef5829c6bd0/revisions
    5. That’s it! compile_libs_32bit.sh is now ready to build a standalone cld2 executable on your machine. It is just a matter of executing your custom compile_libs_32bit.sh script
      $ chmod u+x compile_libs_32bit.sh
      $ ./compile_libs_32bit.sh
      
    6. It takes a few minutes to build, and voila, you have a standalone cld2 executable built and installed on your machine.
      $ which cld2
      /usr/local/bin/cld2
      $ echo "Hello World こんにちは γει? σου" | cld2
      ExtLanguage Japanese(35% 3904p), GREEK(33% 1024p), ENGLISH(27% 1194p), 45/43 bytes of non-tag letters, Summary: Japanese*
        SummaryLanguage Japanese(un-reliable) at 8391021 of 43 562us (0 MB/sec), (null)
      
    7. For the record, here is what gets installed
      $ which cld2
      /usr/local/bin/cld2
      $ ls -l /usr/include/cld2/*
      /usr/include/cld2/internal:
      total 52
      -rw-r--r--. 1 root root 28159 Jun 20 17:49 generated_language.h
      -rw-r--r--. 1 root root  5839 Jun 20 17:49 generated_ulscript.h
      -rw-r--r--. 1 root root   945 Jun 20 17:49 integral_types.h
      -rw-r--r--. 1 root root  8326 Jun 20 17:49 lang_script.h
      
      /usr/include/cld2/public:
      total 24
      -rw-r--r--. 1 root root 14850 Jun 20 17:49 compact_lang_det.h
      -rw-r--r--. 1 root root  7056 Jun 20 17:49 encodings.h
      $ 
      $ ls -l /usr/lib/libcld2*
      -rwxr-xr-x. 1 root root 6457627 Jun 20 17:49 /usr/lib/libcld2_full.so
      -rwxr-xr-x. 1 root root 1742462 Jun 20 17:49 /usr/lib/libcld2.so
      $ 
      
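    For the curious, the essence of the diff in step 4 boils down to roughly this (a sketch from memory; the exact flags and file names are in the gist linked above):
      # build the shared libraries as compile_libs.sh already does, then also
      # compile the bundled test driver into a standalone executable and install it
      g++ ... compact_lang_det_test.cc -o cld2
      cp libcld2.so libcld2_full.so /usr/lib/
      cp cld2 /usr/local/bin/cld2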

    Hope this helps someone, and kudos to cld2 for being awesome!

  • Undermining what makes things work


    The use of computers to improve our lives, automate seemingly mundane work, and enable doing much with less is omnipresent. As with anything commercially successful, software too has received careful scrutiny, tons of knowledge, and a plethora of suggestions on how to keep making it better. As many of us are probably aware, the software industry is one of the most exciting, and most confusing, commercial domains of our age.

    My own experience attempting a startup has provoked me to think deeply about the right balance between software as the technology forerunner of my company, and the company as a managed organization that continues to improve that technology. I ponder questions like: should management be well-versed in technology, or should management be just good, plain old management? But then, how is managing technology different from any other case for management, say the automobile industry? Do I favor a hands-on, roll-up-your-sleeves approach (and attitude) from my management team? Or do I allow managers to carefully handle the veil between upper management concerns and low-level, technical, trench-oriented projects?

    To me, the answer is very elusive, and not one that may have a definite answer. I’ll find out, but there is one small detail that I have some impressions about.

    Technology projects about improving existing technology are not new. Almost every organization with an IT division has all of its technology projects aimed at improving existing ones. As with everything else, most of these are assessed against one primary measure – the cost. Large technology changes, cost of maintenance, technology owned but essentially maintained by vendors, value proposition to any emerging business … in short, how much do we get for how much we give. That is an interesting lens, borrowed from the commercial world of business. While there are reasons why such monetary evaluation precedes technology value, the issue gets clouded when technology becomes almost secondary to a company’s growth preference, and money comes first (and foremost).

    Amidst all this progress, I keep wondering about the value of existing technology. To me the clear question is: how do organizations perceive the value of existing systems? How far into the future can current technology be written for – 1 year, 5 years, 10 years? Would we, as users and creators of the technology, have the same unchanged need when we get there?

    Overall, how do you really evaluate technology improvements when one is blind to the fact that it is current, existing technology which has led to the situation where you can think about improvements at all?

    I feel that most of us undervalue working technology in favor of an ideal one. The ideal one, if it exists, will only come along when you stop looking any further. Working technology, however, is real, error-prone, and touches you now. Improvements, not just in technology but anywhere, happen when we start asking the right questions of the right problems. The other option is to keep yourself shrouded in veils, pretending there is a problem your teams are solving, making a few more along the way to improvements.

  • 2013 was a great year

    I hope 2013 was a great year for everyone. It was an important year for me, after all.

    As a final accomplishment for this year, I just launched a fun chat service called hi5 on https://www.5w1h.co/.

    5w1h gives you a single place to chat with all of your friends on different networks, most of whom you can otherwise only talk to when you sign in to that particular network, say Outlook or Google. 5w1h is a fun service created to add a twist of fun, a dash of simplicity, and a mix of colors to your chats. It is designed to work with all popular social networks and makes it easy to chat with your contacts across your networks from a single place.

    Thanks to my wife Elena and my kids Amrita and Nicolas for putting up, with a lot of patience, with my long days and late nights working on this.

    I wish everyone a very happy new year 2014 with a lot of energy, happiness, and luck!

  • Simplicity is the ultimate sophistication

    Software is an art. Simplicity is the ultimate sophistication. Project SNOWFLAKE is an experiment to prove this through beautiful, minimalistic user interfaces. By suggesting alternate designs for some of the most popular sites, this project aims to raise awareness of quality, beautiful, elegant human user interfaces.

    In many ways, this project is a bold attempt to refine our taste in software. Whenever I come across applications that at best mock a cockpit, I wonder if many of us really appreciate good, quality software – especially the user interfaces. A quality user interface can instill a sense of pleasantness, and many of us instinctively know that the year 2013 has been marked pronouncedly by responsive user interfaces.

    As part of SNOWFLAKE, I target some popular sites and topics in this independent experiment. Not surprisingly, meeting the basic requirements of any-device, any-browser support is no longer an issue. The time is perfect for fully responsive, typographically adaptive user interfaces. Yet surprisingly, they are not the norm yet. It will become evident that fluid, beautiful, elegant user interfaces prevail over geeky, complicated designs. Snowflake’s showcase demonstrates how elegant a user interface can be while retaining all of its usefulness. As is often the case, many beautiful user interfaces do already exist; notable mentions will continue to be added to the showcase, and I am delighted by this personal pursuit of beauty in human user interfaces, both spiritually and intellectually.

    Through SNOWFLAKE I strive for beauty conveyed through human software user interfaces, endeavoring to promote human user interfaces to a status equal to that of art or music.

  • Template – HTML5, CSS3 ready Responsive Web Design Page

    I ended up creating a simple Template.

    After working yet again on responsive web design for my third website, I saw that I badly needed a template: a simple drop-in skeleton I could use as a kick-start, so I could focus on content instead of structure.

    There is a bevy of instructions, usages, and gotchas to get right when implementing responsive web pages. Instead of re-learning such lessons each time, I thought it best to start with a base and keep adding the important bits to it over time.

    It was tempting to create a template with _all_ of the best out there. But I chose to stick with the most essential, stable, and practical template I could figure out. The project is on GitHub, and I look forward to improving Template with the aim of providing standard, clean, responsive websites.
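
    For flavor, the skeleton amounts to little more than this (a minimal sketch of the idea, not the actual Template source):
    <!DOCTYPE html>
    <html lang="en">
    <head>
      <meta charset="utf-8">
      <!-- the one tag responsive design cannot do without -->
      <meta name="viewport" content="width=device-width, initial-scale=1">
      <style>
        /* mobile-first: everything is a single column by default */
        .col { width: 100%; }
        /* widen into two columns once there is room */
        @media (min-width: 48em) {
          .col { width: 50%; float: left; }
        }
      </style>
    </head>
    <body>
      <div class="col">content</div>
      <div class="col">more content</div>
    </body>
    </html>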

    Happy responses!

  • Underestimating software

    My wife shared an interesting thought this morning over breakfast: Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. When theory and practice are combined, nothing works and nobody knows why.

    Over countless experiences shared by many people, big or small, IT is still an art. What’s more, it is still an art far from perfection. IT, in particular software done in large organizations, has this syndrome: many good things get done, yet large IT projects “always” fail. I have yet to come across a truly successful project. There are many good projects – Apache Software, Google and the like – but the more I think about it, these didn’t grow successful overnight. Part of the impression of success is adoption: the proportion of people who are affected, in other words, who depend on using it.

    I may not be touching on something yet unknown here, but just how many methodologies, principles, guidelines, theories, and languages are out there? And of those, how many have really convinced you that they will solve your IT problem, guaranteed? Is it MSF, or TOGAF, or ITIL? Is it AGILE or LEAN or TQM? Is it proprietary or open source? Is it platforms, or home-grown? Is it C# or X-Code? Is it OOP or procedural programming? Is it Russians, or Indians? Is it lack of documentation, or lack of requirements articulation?

    Let’s consider for a moment the ideal state – person A has a problem. Person B offers and gives person A a tool to solve the problem. Person A is happy. Problem solved. Person B too is happy. Imagine this for a moment with a simple need; let’s say lunch in a restaurant. You’re hungry, you go to a restaurant, order, eat, feel satisfied, and offer compliments. Your need is solved; both you and the restaurant are happy.

    Is preparing food a new thing? Is your way of ordering different? Did you have to think much over this, except perhaps to decide on what you want to eat? Or buying clothes? Okay, maybe one can go far with this and say there is always customization – made-to-order dishes, made-to-order suits, made-to-order software.

    Why is building useful software fast so difficult in our organizations today? Will we ever get something done quicker? I may be completely wrong, but the problem is not in the way we are doing things; it’s the approach with which we plan our next steps. And this is not as easy as saying follow MSF, or TOGAF ADM, or ITIL’s “IT as a service” ideas. More often than not, these are confusing to begin with, get understood differently, pushed unnecessarily, and carried over as a burden to the next guy heedlessly.

    Today more and more of us are working for software, instead of the other way round. Not to mention, “practice is when everything works but nobody knows why” seems to come up more frequently than I had anticipated. The ever-changing need to change something, time pressures, and individual ambitions all seem to have a coherent effect of making things more complex, not simpler. And when things get to that stage, it’s a point of no return; people carry on as if it were a household chore like cleaning dishes – you don’t get any fun, or see anything of value, in it. Soon it becomes “that’s the way we do things around here” common sense, and everyone gets by on this, leaving yet another subtle roadblock to overcome before we can begin thinking about other problems. Not to mention the problem is further aggravated by pure Know-Nothing-Know-it-All theorists.

    Okay, where am I going with this? I admit it’s all over the place without making a concrete point.

    I am still pondering whether an old Chinese proverb – “He who solves a problem with a problem will always have a problem in waiting” – describes what’s going on today.

    Without sounding too pessimistic, or too confident (out of my mind), maybe I’ll give this thought another try soon …

    I sometimes get puzzled about requirements. If we could look into the future and predict what we would need with crystal-clear clarity, then perhaps it would be a no-brainer to make a system out of it.

    Take another approach: how do I extrapolate (i.e., read between the lines) when something is stated as a requirement? This is different from offering a menu of choices and asking someone to pick one; rather, it is the other way round.

    Systems made by the same person who articulates his own needs are a lot easier, simply because you can adapt to changes in your own perception of what you might need, as it keeps changing when you see some degree of finished system.

    If a person is ‘cornered’ into a witness box with technical guys challenging him with different questions, each question feels random. They are not on the same wavelength, or if they are, someone is making wild assumptions about what is understood. How does one get two people thinking on the same plane of thought? How can this new level of information be captured right at that moment, yet not be set in stone to make unneeded commitments? Why have all the methodologies, tools, and approaches not succeeded so far? Why is there no ‘right’ answer?

    By ‘right’ answer, I allude to becoming ‘one’ with the purpose and the intent of the author. There are no more questions at this state, no doubts about the goal, no disagreements on the outcome.

    Has software taken too much influence from the traditional assembly line – one step follows the other, the typical, traditional silo process? How do we foster ‘creativity’ without a process? Part of getting better than our past ways of doing things is also to improve upon the way we did them. But the fundamental approach hasn’t changed at all. We still ‘see’ the IT development process in its strictest sense of planning, designing, developing, testing. Some of that makes sense, but taken altogether it sometimes makes no sense, especially when software in itself has made significant strides toward making or breaking something too fast, too easily. We have created too much ‘fat’ over the actual process of ‘making’ something, under the pretext of entire governing organizations, support, organizational layers, vendors, and so many other non-essential things.

    So what do I mean by all this? Do I mean we don’t need project management? Do I mean one can just start building things and let the plan evolve as the things get built? Do I say governance is counterproductive to its purpose?

    Before I can answer that, it appears to me that something as simple as a software program has now become so deeply biased by traditional laws of doing things ‘right’ that no one tries to break out of the ordinary any longer. Throw any new idea (software, technology, automobiles, …) into the world, and the moment it is commercialized, businesses spawned, livings made, there is already a big wall of people to get through ‘before’ an actual user and the software can actually talk to each other.

    People always tend to favor ‘gray’ zones – something that is not yet known and needs improvement. In somewhat the same way, what I am trying to discover through this thread is the same thing. I am discontent with how software is treated, how much non-essential information and structure is built around it, and just how much waste of time has become ‘customary’ in the name of meetings, updates, releases, bugs, and analysis, with hardly anyone breaking a drop of sweat.

    The best way is to ‘play’ the game for the time being … but I sense something in this game fundamentally needs to change eventually.

    A plausible summary that agrees with one argument I have, borrowed from http://codebetter.com/gregyoung/2013/03/06/startups-and-tdd/

    I wanted to write a few comments about TDD in startups. Good code is the least of the risks in a startup. Sorry but worrying about technical debt making us go slower when we have a two month runway and likely will pivot four times to quote Bob.
    Captain Sulu when the Klingon power moon of Praxis exploded and a young Lieutenant asked whether they should notify Star-Fleet: “Are you kidding?” ARE YOU KIDDING?
    One of the biggest mistakes in my career was building something appropriate…

    It was just after Hurricane Katrina. I was living in a hotel. An acquaintance asked me if we could hack together this business idea they had for a trading system. He had the knowledge but not the know how. I said sure, hell I was living in a hotel!

    In less than two weeks we had an algorithmic trading system. It was a monstrosity of a source base. It was literally a winforms app connected directly to the stock market. UI interactions happened off events directly from the feed! Everything was in code behinds (including the algos!) Due to the nature of the protocol if anything failed during the day and crashed the app (say bad parsing of a string?) the day for the trader was over as they could not restart.

    But after two weeks we put it in front of a trader who started using it. We made about 70-80k$ the first month. We had blundered into the pit of success. A few months later I moved up with the company. We decided that we were going to “do things right”. While keeping the original version running and limping along as stable as we could keep it while adding just a few features.

    We ended up with a redundant multi-user architecture nine months or so later, it was really quite a beautiful system. If a client/server crashed, no big deal just sign it back on, multiple clients? no problem. We moved from a third party provider to a direct exchange link (faster and more information!). We had > 95% code coverage on our core stuff, integration suites including a fake stock exchange that actually sent packets over UDP so we could force various problems with retry reconnects etc/errors. We were very stable and had a proper clean architecture.

    In fact you could say that we were dealing with what Bob describes in:
    As time passes your estimates will grow. You’ll find it harder and harder to add new features. You will find more and more bugs accumulating. You’ll start to parse the bugs into critical and acceptable (as if any bug is acceptable!) You’ll create modules that are so fragile you won’t trust yourself, or anyone else, to modify them; so you’ll work around them. You’ll build a festering pile of code that, with every passing week, requires more and more effort just to keep running. Forward progress will slow and falter. It may even reverse as each release becomes buggier and buggier, and less and less stable. Catastrophes will become more and more common as errors, that should never have happened, create corruptions and damage that take huge traunches of time to repair.
    We had built a production prototype and were suffering all the pain described by Bob. We were paying down our debt in an “intelligent” way much the way many companies that start with production prototypes do.

    However this is still a naive viewpoint. What really mattered was that after our nine months of beautiful architecture and coding work we were making approximately 10k/month more than what our stupid production prototype made for all of its shortcomings.

    We would have been better off making 30 new production prototypes of different strategies and “throwing shit at the wall” to see what worked than spending any time beyond a bit of stabilization of the first. How many new business opportunities would we have found?

    There are some lessons here.
    1) If we had started with a nine month project it never would have been done
    2) A Production Prototype is common as a Minimum Viable Product. Yes testing, engineering, or properly architecting will likely slow you down on a production prototype.
    3) Even if you succeed you are often better to stabilize your Production Prototype than to “build it right”. Be very careful about taking the “build it right” point of view.
    4) Context is important!

    Never underestimate the value of working software.

  • Fixed footer auto height content

    While redesigning one of my pet projects – Simplememos – I came across a very pesky layout issue. I wanted a simple header/content/footer layout with some specific behaviors: the header is part of the content, and the content uses up 100% of whatever area is left after positioning the footer fixed at the bottom. Sounds easy? Well, what if the content has nothing inside? Yes, all of my content was absolutely positioned ‘notes’, which meant the content had nothing by which it could auto-size itself! I also needed a proper visual layout for all screen sizes, handling browser resize, maximize, and restore for ALL browsers.

    The simple answer: yes, I have that layout. Take a look at Simplememos, and feel free to play around in any browser of your choice.

    What I experienced, played around with, researched, and eventually figured out is that there is no pure-CSS way of doing this. My own philosophy is to write less and let CSS handle the details of my layouts as I have specified them. But as many seasoned web players know, browsers come with all variations of CSS interpretations; and when you see all those threads on Stack Overflow around layouts, you’ll definitely find some fix, but then again it won’t work for your specific case if it was like the one I had.

    I can’t say Stack Overflow wasn’t of any help. In fact, I gathered many useful points from across a variety of Stack Overflow posts, replies, and code snippets, and eventually came up with a version that works for my needs.

    My implementation is quite simple, but it is a mix of CSS and JS. I couldn’t avoid JS, unless I went with CSS expressions, which in a nutshell are JS to me.

    I created two regions as below –
    <div id="content">... my header, content inside ...</div>
    <div id="non-content">... my footer here ...</div>

    My div#non-content auto-expands to fit my footer, since the footer is text. However, my div#content contains only absolutely positioned divs, so it has no auto-expand criteria as such. Thus, I hook up jQuery’s document-ready event to dynamically resize div#content

    $(document).ready(function () {
      // content gets whatever height the footer does not use
      $('#content').height($(window).height() - $('#non-content').outerHeight());
    });

    And I also hook the window’s resize event to do the same dynamic height change again.
    $(window).resize(function () {
      // same code, so div#content keeps occupying 100% of the remaining height
      $('#content').height($(window).height() - $('#non-content').outerHeight());
    });
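
    In practice both handlers can share one function (a small refactoring sketch):
    function fitContent() {
      $('#content').height($(window).height() - $('#non-content').outerHeight());
    }
    $(document).ready(fitContent);
    $(window).resize(fitContent);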

    There are some additional things I could have done
    1. Include a spacer image img#spacer-height that is hidden but dynamically resized to occupy the maximum available height, and another image img#spacer-width to occupy the maximum available width. I used this once for IE6 and below to handle overflow:auto for div#content. Without any content, setting width and height is OK, but IE6 never bothers to show any scrollbars.
    $('#spacer-width').width($(window).width());
    $('#spacer-height').height($(window).height() - $('#non-content').outerHeight());

    2. Use tables! This would have been the best option, but sadly Firefox does not support overflow:auto in table cells. As an alternative I didn’t bother to embed divs, because I had already got to the point where I have tables, divs, CSS, and JS to resize the div anyway.

    You can see my source in action on my own Simplememos site. If you have pure CSS alternatives, I’ll be more than happy to hear!

    Aside from the above, I must say that using Knockout JS, custom KO bindings, jQuery, and extending jQuery selectors was a rewarding experience. I have kept these, and quite a number of Stack Overflow posts, open for quite a few months now.

  • Comparing C and Java

    I never learnt C; I learnt Java. And I was glad I started with Java as my first real programming language (well, my first one was actually Visual Basic). I appreciated the OOP, and at some point got overwhelmed with the jargon. So I decided to switch to C, a language that has lasted decades, is easy to begin with, yet gets obscure as you start going deeper.

    I thought it would be a nice idea to do a very simple comparison of these two powerful languages. It becomes easier to see how things fit together in either world; once you understand their similarity, it’s the same language at its roots.
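
    To give a flavor of the kind of comparison I mean, here is an illustrative snippet of my own (not part of the chart below): the same idea expressed in C, with the rough Java equivalent noted alongside.
    #include <stdio.h>

    /* C bundles data in a struct; Java bundles data and behavior
       in a class: class Point { int x, y; ... }                  */
    struct point { int x, y; };

    /* C passes the struct explicitly; in Java this would be an
       instance method: void move(int dx) { this.x += dx; }       */
    void move(struct point *p, int dx) {
        p->x += dx;
    }

    int main(void) {
        struct point p = {0, 0};   /* Java: Point p = new Point();   */
        move(&p, 5);               /* Java: p.move(5);               */
        printf("%d\n", p.x);       /* Java: System.out.println(p.x); */
        return 0;
    }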

    C Java Comparison