Codekana Blog

Updates & general news on Codekana

Codekana is now free, and will soon be open source

Monday, October 10th, 2011

The editor component in VS2010 was completely new. Plug-ins that need to reach into the ugly, deep bowels of Visual Studio, such as ViEmu and Codekana, needed rewriting. We rewrote ViEmu, which was a large project. We also got started rewriting Codekana, with a much more ambitious goal than a straight port: new, user-customizable parsing and display technology that could turn Codekana into a really powerful and popular product.

After a few months of work, it became clear the goal was too ambitious: it would take years, not months. Given our current situation, it had to be postponed until more resources are available.

We evaluated doing a simple, straight port instead, but it didn’t make much sense. Codekana wasn’t a stellar seller before VS2010, and there was no reason to think this would change without a radical redesign. A straight port was still a large project, and it just didn’t make business sense.

But there was an existing customer base, and I didn’t want to leave them stranded.

For the past months, I haven’t been able to give Codekana the attention it needs and deserves. The most salient issue is that we haven’t been able to release a port to VS2010. After giving it a lot of thought, I decided that the right thing to do is to make it open source, so that anyone can help out, and so that existing users can be supported or can even support themselves. This is as much an experiment as anything else.

As the first step in going open source, I have removed the license-key checks, released a new version (1.5.5), and removed the purchase options from the web site. So now, Codekana is effectively free to use.

In the coming weeks, I hope to clean up the codebase a bit, choose a license, and upload the source code somewhere. I will be grateful for any suggestions and advice regarding the technicalities of this: source code control, public repository, collaboration infrastructure, etc… I usually use Subversion, but I will start a new repository from scratch, and I’m happy to use something else if it will make things easier for everyone.

I will be more than happy to help out those interested with a tour of the codebase, answering questions about it, etc… Codekana includes a multithreaded asynchronous job system (used for background parsing), an incremental C/C++/C# parser, complex text-changeset processing logic, custom drawing, and deep interaction with VS. It can definitely be unwieldy at times.

So open source it was.

I’ll announce the availability of the source code here, and meanwhile, let me know what you think!

The prospect for Codekana / VS2010

Saturday, September 11th, 2010

“When will Codekana for VS2010 be ready?” We get this question quite often, so I thought I’d post a notice up here prominently.

We have actually started work on Codekana for VS2010. The main point is that VS2010 has a completely new text editor, which means we had to rewrite a large part of Codekana from scratch (as we had to do for ViEmu, which is already released). This is a lot of work but, since the new editor is so much more powerful, we decided to take full advantage of it.

We decided to go back to the drawing board, and design a completely new Codekana. This new tool will be much more powerful and flexible than the previous one: very user-configurable, and working with many languages, rather than just C/C++/C# and the like.

It will also make use of all the new capabilities of VS2010: varying text sizes, graphics, bitmaps and controls embedded in the text, etc…

We expect we will be able to have a first release by the end of 2010 or so. Please watch this blog and my Twitter stream (@jonbho) for beta and release announcements.

New Codekana 1.5 pre-release: module definitions pop-up with quick-goto

Thursday, April 22nd, 2010

The new Codekana 1.5 for VS2002 to VS2008 is ready for download:

http://www.symnum.com/latest.html

(Symnum Systems is the new incorporated name of the old NGEDIT Software; expect a full phase-out of the old name everywhere.)

Before anything else: this version doesn’t support VS2010. We’re working on it, and we hope to have something in 2-3 months. See the bottom of this post for more info.

The main new feature in this version of Codekana is a new command with great functionality: Quick-Goto with a Module Definitions List. Mapped to <Alt-.> by default (you can map it to whatever you want), it pops up a list of all the functions and definitions in the current module, with their full declaration and the symbol itself in bold. If you type part of a name, or several parts of a name, it filters the list down to the symbols that match the parts you’ve typed. You can then move up and down with the cursor keys, and pressing <Return> takes you to that function in the module. Here it is for you to see:

quick-goto

I hope you will find this a very useful feature.
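
To make the filtering concrete, here is a minimal sketch of this kind of multi-part matching (an illustration only, not Codekana’s actual implementation; the exact matching rule, every typed part appearing somewhere in the symbol name, is my assumption):

    using System;
    using System.Linq;

    class QuickGotoFilter
    {
        // A symbol matches if every whitespace-separated part of the
        // query appears somewhere in its name, case-insensitively.
        static bool Matches(string symbolName, string query)
        {
            return query
                .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
                .All(part => symbolName.IndexOf(part, StringComparison.OrdinalIgnoreCase) >= 0);
        }

        static void Main()
        {
            string[] symbols = { "ParseModule", "ParseBlock", "RenderOutline" };

            // Typing "par blo" narrows the list down to ParseBlock.
            foreach (var s in symbols.Where(s => Matches(s, "par blo")))
                Console.WriteLine(s);
        }
    }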

I’m doing a pre-release before the official one because this version incorporates the new license-key system, and I want to roll things out slowly to test everything. The link above has all the information on how to obtain the new-style license key. If there is any problem, just get in touch with me via email or any of the support forms. The official release will happen over the next few days.

We are working on ViEmu/VS2010 first, but the main reason for the wait is not that: we have done a “product reboot” for Codekana/VS2010. The new version is a restart from scratch, a redesign with a new vision and direction. Moving from native C++ and COM to managed C++/CLI and .NET has been quite a challenge, but the great news is that the new editor model in VS2010 will allow us to do much more awesome things than the previous VS editor ever allowed. We are already doing some very innovative and incredibly useful things, and we just have to put in some more work to get it into a releasable state. We are innovating in everything from parsing to visualization to functionality, and we think the new Codekana for VS2010 will make quite a few people take a look at what we’re doing. Watch here for news, or follow me on Twitter.

New Redesign for the Codekana Web Site

Thursday, September 24th, 2009

If you surf over to http://www.codekana.com, you will see that the whole site (including this blog) has been completely redesigned. It includes new information such as customer testimonials, an animated GIF on the home page showing how Codekana works, and a clearer and more complete presentation of all the information. I hope it will help people understand the product faster and with less effort. Let me know what you think about it!

I also posted on the NGEDIT blog to announce this new design and detail the reasoning behind it. That post also has some interesting info about the current state and next steps of my different projects, including Codekana, so you might find it worth a read:

New Codekana web site, preview of the J1CK Twitter client, and next steps

Codekana customers around the world

Thursday, September 10th, 2009

I’ve been meaning to do this for some time now, and I’ve finally found the time to put it together. I dug up the PHP code I used for the ViEmu around-the-world map over two years ago, entered the Codekana customer data, and got a nice map courtesy of Google Maps.

An overview of Codekana customers around the world:

World

This time, unlike in the ViEmu map, America is in the center. Just so that I won’t be accused of any favoritism!

Here are far-east Asia and Oceania:

Asia and Oceania

Google’s geoloc is especially poor in this part of the world, so several sales often coalesce into a single point in the middle of the city (and I have to remove the street portion of the address manually, or the geoloc just doesn’t return anything!).
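
As an aside, the manual workaround looks roughly like this in code (a sketch; the geocoding call is abstracted into a function parameter, since the exact service API is beside the point, and the stub below is invented for illustration):

    using System;

    class GeocodeFallback
    {
        // If geocoding the full address finds nothing, drop the street
        // portion and retry with just the city/country part.
        static (double Lat, double Lng)? GeocodeWithFallback(
            string address, Func<string, (double Lat, double Lng)?> geocode)
        {
            var result = geocode(address);
            if (result != null)
                return result;

            // Assume the street is the first comma-separated component.
            int comma = address.IndexOf(',');
            return comma >= 0 ? geocode(address.Substring(comma + 1).Trim()) : null;
        }

        // Stub geocoder that only resolves city-level addresses.
        static (double Lat, double Lng)? StubGeocode(string addr) =>
            addr.StartsWith("Roppongi") ? ((double Lat, double Lng)?)null : (35.66, 139.73);

        static void Main()
        {
            Console.WriteLine(GeocodeWithFallback("Roppongi 1-2-3, Tokyo, Japan", StubGeocode));
        }
    }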

Europe is also fairly well covered:

Europe & Surroundings

Google’s geoloc for the UK is much improved since the last time I did this; it just didn’t work at all back then.

And finally, North and Central America. You can see the US is where Codekana is most popular – or maybe it’s just the higher concentration of developers?:

America

I find it curious that the tech hotspots in the western half of the US are so concentrated in California and Seattle, while sales in the eastern half seem much more evenly distributed. There are probably more sales in California and the Seattle area than in all the others together, but it’s not apparent on the map because they are so heavily concentrated.

These maps are neither complete nor totally precise: Google’s geoloc isn’t always able to find the (lat,long) pair for an arbitrary address, and I haven’t used the data of all Codekana customers. If you are a customer and your town or country is not marked, please understand that this is why! I just wanted to show that Codekana is widely used around this beautiful little planet.

I am currently working on a redesigned web site for Codekana, which will likely be ready in just a few days, and a new version of Codekana itself with several improvements is also on the way. The plan for late 2009 and early 2010 is to provide new versions with hugely improved functionality, both for Visual Studio versions up to 2008 and for the new Visual Studio 2010.

To all Codekana customers: thank you so much for relying on and supporting Codekana. And I hope to give back to you by providing new functionality to make your daily coding easier and more enjoyable!

On the Speed of Light, Innovation, and the Future of Parsing

Thursday, April 2nd, 2009

(Or, how naïve early-twentieth-century physicists look today,
and how naïve we will look to twenty-second-century developers)

Michelson-Morley Experiment Notebook

I’ve always loved the popular story about how early-20th-century physicists thought physics was basically “done”. There were just a couple of minor phenomena lacking an explanation, and it was widely believed that they would soon be fully understood. The whole edifice of physical law would be finished in a matter of years. These “minor phenomena” were the photoelectric effect and the apparently constant speed of light regardless of the observer, as shown by Michelson and Morley in their famous experiment.

Of course, we now know that explaining these phenomena would give rise to relativity and quantum mechanics. These theories questioned all previous physics, and made us start to actually see the enormous complexity of our universe: it’s not even clear that we’ll ever be able to fully understand it.

In some ways, I feel the computer and software industry is in a similar state, and I despair when I see the same short-sighted attitude in many developers and computer scientists, despite how primitive our current understanding is. Bill Gates is said to have lured David Cutler into leaving Digital and joining Microsoft to lead the Windows NT effort by asking, “Don’t you want to write the last major OS kernel?” Purely as a hiring move, it was obviously a great success. But unfortunately, to many in the industry, current kernels really are the last step of the road. Ouch. It’s not only that this is wrong; this belief impedes original thought and progress.

What we now have are just the first babblings of software: clumsy languages, systems and architectures, poor tools, and primitive theories, all of which are guaranteed to have an expiry date in the not-very-distant future. It’s a safe bet we won’t be using the current theories and processes in a matter of a few decades.

I recently read an otherwise educated developer remark that “parsing is basically a solved problem”. He probably knew about parsing, maybe even in some depth: recursive descent, LR parsing, lex and yacc and their multiple descendants… But he must have been unable to see beyond that. It’s the only explanation for his assertion, because I think nothing is further from the truth.

These are the very early days of computing. We have just started exploring the shore of a new continent, and it’s shocking, because many are already proclaiming that it’s over. But the good news is that there is plenty of unconquered territory, most of the settlers have their eyes covered, and they’re not even attempting to grab any of it.

An Itch to Scratch

(Or, a vision, and the twisted path to its completion)

I am somewhat obsessed with visualization and user interfaces: anything that helps you better understand what you are working on, and which helps you create. And for better or worse, all my product ideas are born from an itch I need to scratch myself.

Back in early 2005, I thought it would be awesome to have the following visualization of code while I was editing it:

Early Codekana Design

The goal was to understand at a glance the structure of the code and its control flow. I wanted to see the names of functions and classes highlighted in their definitions (the syntax of C-derived languages is notoriously horrible in this regard, hiding the name among a lot of clutter). I also wanted colored regions marking code blocks, visually grouping the code. I wanted to be able to see where everything starts and ends, and see right away the type of each block: red for loops, green for if-blocks, and so on. Statements that break the normal flow of a loop (break, continue) should stand out. All this had to be done in real-time, inside the text editor or IDE, while the code was being edited.

You could call this advanced syntax-highlighting. Actually, what is usually called syntax-highlighting really isn’t such, because it doesn’t use any syntactic information to color the code. It usually uses only lexical information: the distinction between keywords, operators, numbers, constants, identifiers… Very few tools use any syntactic information for display purposes. And, sadly, this is not a design choice, but a consequence of lexical analysis being so much simpler to do.
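
To illustrate the difference, here is a minimal sketch of a purely lexical pass (the token kinds and toy tokenizer are invented for illustration). It can tell that while is a keyword, but it has no idea which brace opens a loop body and which opens an if-block; that distinction requires an actual parse:

    using System;
    using System.Text.RegularExpressions;

    class LexicalHighlighter
    {
        // A purely lexical pass: each token is classified by its kind
        // alone, with no knowledge of the surrounding structure.
        static string Classify(string token)
        {
            if (Regex.IsMatch(token, @"^(if|else|while|for|break|continue)$"))
                return "keyword";
            if (Regex.IsMatch(token, @"^\d+$"))
                return "number";
            if (Regex.IsMatch(token, @"^[A-Za-z_]\w*$"))
                return "identifier";
            return "operator/punctuation";
        }

        static void Main()
        {
            // Every "{" gets the same classification, whether it opens
            // a loop, an if-block, or a function body.
            foreach (var t in "while ( count < 10 ) { break ; }".Split(' '))
                Console.WriteLine($"{t,-5} -> {Classify(t)}");
        }
    }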

A feature like this, once imagined, is the kind of thing you can’t take your mind off. At the time, I also thought it was completely innovative, as I hadn’t seen it anywhere else. This was one of the features I decided I would include in NGEDIT, the text editor I was working on. There were many other foundational parts of the editor that still needed work, so I didn’t actually start implementing anything.

ViEmu Logo

Some time after that, my development effort took a detour: I started working on ViEmu, my vi/vim emulator for Visual Studio, Word, Outlook and SQL Server. For ViEmu, I adapted the vi/vim emulation core I had written for the editor so that it could work inside these host apps, and released it as a stand-alone product. I focused exclusively on the ViEmu products for some time, turning them into solid, working products.

During this almost-exclusive stint on ViEmu, I stumbled upon a few features similar to the one I had envisioned: Jacob Thurman’s Castalia, which implemented something similar for Delphi; DevExpress’s CodeRush, with a similar feature for Visual Studio; and also some scripts for Lisp display in Emacs. They called it structural highlighting, a name I don’t like much, but which sidesteps the mix-up with syntax-highlighting. Still, none of them did the kind of control-flow-based color-coding I wanted, which is where it becomes really useful information.

Fast-forward to early 2007. ViEmu sales were slow for the first few months of the year. I was living off of this, and savings were not huge, so I decided to implement another product to try to complement the lousy sales. I had by then renamed the text editor to kodumi (a much nicer name, if I may say so myself), and it was still a work in progress. Actually, it had turned into a major research project that would (and will) still take a long time to complete. If I was going to get somewhere soon, I had to tackle something more manageable.

So I did some digging in the drawer of project and feature ideas. Among the many crazy, undoable concoctions, I came across the “control-flow outlines” idea. On one hand, I still thought it was a cool feature that would really improve the visualization of code. On the other hand, after ViEmu, I had become very familiar with extending the Visual Studio built-in editor (short version: you’ll have to sweat blood to make it do what you want). Together with a few other useful features, it could make a nice addition to VS. It made sense, and it seemed within reach. That would be it.

Codekana Logo

Inspired by “kana”, the name of the Japanese syllabic writing systems, I decided to call the product Codekana. And I set out to write it.

Such a project involves many tasks: there are core technology elements, interaction with the host application, and much more. At the beginning of any project, I like to evaluate the difficulty of each challenge. Adding extra elements to VS’s text editor window was clearly doable: other add-ins were already doing it, and a quick prototype proved it. Filled background regions turned out to be nigh-impossible, due to VS’s internal text-rendering system, but I could draw lines to represent the blocks. This kind of hacking is never simple, or nice, but for an experienced Windows developer it was a matter of putting in the hours. This was not the hard part.

Even if it is a solved problem to some, it was obvious to me from the first moment what the really hard part was.

It was, of course, the parsing.

On a Whim

(Or how perfectionist thinking can sometimes get you somewhere worthwhile)

I will gloss over the difficulties of parsing C/C++ and C#, which is not an easy job. I will also skip describing the task of implementing a responsive background parser on top of a multithreaded task engine. I will even refrain from whining about the difficulties of getting Visual Studio to do what you want.

What made everything else seem simple in comparison is that, unlike existing tools, I was determined to properly support the free-form editing we do while coding. What do I mean by this? Have a look at the following typical editing session (the cursor is shown in red, and newly inserted text is marked in green):

Sample Editing Session

This session starts from a grammatically correct version of the code, which is then gradually modified, in this case just by typing. The end result, on the right, is also grammatically correct. But the intermediate version, when the new code has only been partially typed, is incorrect: braces don’t pair properly. There is a new opening brace which doesn’t yet have a matching closing brace. The question here is: what should the Codekana parser do with this version of the code?

Traditional parsing starts from the top and scans linearly. Braces are paired according to a simple logic: a closing brace is understood to close the most recent opening brace that hasn’t yet been matched. This method works wonderfully on correct source code. But applied to the intermediate version above, the result is surprising. See what pairings this scan will find (in green), and what brace it will leave unmatched (in red):

Erroneous Code Wrongly Parsed

Naïve linear rescan, starting from the top
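
In code, this naïve pairing logic is just a stack scan; here is a minimal sketch (illustrative only, not any particular tool’s implementation). Run on an intermediate version with a freshly typed, still-unmatched opening brace, it blames the function body’s opening brace, exactly as described below:

    using System;
    using System.Collections.Generic;

    class NaiveBraceScan
    {
        static void Main()
        {
            // The traditional linear scan: every '}' is paired with the
            // most recent '{' still on the stack.
            string source = "void foo() { if (x) { y(); }";  // missing one '}'
            var open = new Stack<int>();

            for (int i = 0; i < source.Length; i++)
            {
                if (source[i] == '{') open.Push(i);
                else if (source[i] == '}' && open.Count > 0)
                    Console.WriteLine($"pair: {open.Pop()} .. {i}");
            }

            // Blames foo's opening brace (offset 11), not the new one.
            foreach (int i in open)
                Console.WriteLine($"unmatched '{{' at {i}");
        }
    }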

Of course, this source code can’t be fully parsed properly, as it’s incorrect. But the parser has detected some pairs, and these could be used for display or other purposes. The outline rendering could use them, except for a key problem: the pairs it has found are wrong. They are completely different from the ones found before the typing, and they will change again after the closing brace is typed. A Codekana-like display would get very confused by this intermediate parse.

All traditional tools rescan in this naïve way. Regular compilers of course work like this: they run in “batch mode”, and have far less information than any dynamic parser. If you try to compile the above incorrect code, most compilers will just flag the function body’s opening brace as unmatched. And if there is another function afterwards, the error can be as obscure as “nested function not allowed”, since the compiler will still think we’re inside foo’s body (and, technically, we are).

But even newer interactive-mode parsers, such as the ones used by Visual Studio’s own IntelliSense, or by JetBrains ReSharper’s or DevExpress CodeRush’s engines, fall into the same trap. The technique they use for dynamic parsing is “throw everything away and restart”. To avoid extra work, they probably restart at the current function or near the editing point, but they don’t go much further than that. In the case above, some of them will even flag all code below the editing point as incorrect. Apart from the erroneous feedback, the extra work of re-parsing the rest of the module twice, first when the opening brace is typed and then again when the closing one is, is nothing to laugh at either.

Of course, they are aware of the problem: an article on Resharper’s blog even complains about C-like languages’ syntax when compared to Visual Basic, where delimiters such as “End If”, “End Loop” and “End Function” allow better parsing with little effort: Of tools and languages.

To alleviate the problem, some of these tools automatically insert the matching closing brace, parenthesis or bracket when the user types the opening one. This ameliorates the issue, but it’s not a real fix: other sequences of editing operations can create similar situations, which cannot be fixed with simple hacks.

I knew this was what the usual techniques afforded, and that this was what current tools were doing. But I had a clear idea of what I wanted instead: I wanted Codekana to understand the situation. When re-parsing the code in the case above, the new opening brace shouldn’t pair with the next pre-existing closing brace; it is much more likely that the previous pairings persist, so the parser should keep them. Here is a screenshot of how I wanted Codekana to treat this case:

Erroneous Code Properly Understood by Codekana

Codekana correctly flagging the new, unpaired brace

It’s the new opening brace that is flagged as unpaired, and the previously existing pairings are assumed to remain valid. Codekana also knows that the source is incorrect, but it can apply this heuristic to flag the brace that is most likely the culprit, rather than the one that the naïve linear scan would pinpoint.

The heuristic is the following: assume that it’s the new brace that is unpaired. This is true most of the time, and applying this logic is the only way to give the user correct information. It is also much better than what any other tool out there does.
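
Here is a deliberately simplified sketch of that heuristic (the real parser works on a full parse tree, not a flat list of pairs; all names here are mine, purely for illustration):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class BracePair { public int Open, Close; }

    class IncrementalHeuristic
    {
        readonly List<BracePair> pairs = new List<BracePair>();
        readonly List<int> unpairedOpens = new List<int>();

        public void OnOpeningBraceTyped(int pos)
        {
            // Assume the new brace is the culprit: every previously
            // established pairing is kept untouched.
            unpairedOpens.Add(pos);
        }

        public void OnClosingBraceTyped(int pos)
        {
            // A new '}' matches the nearest unpaired '{' above it,
            // turning the two into a proper pair.
            var open = unpairedOpens.Where(o => o < pos).DefaultIfEmpty(-1).Max();
            if (open < 0) return;
            unpairedOpens.Remove(open);
            pairs.Add(new BracePair { Open = open, Close = pos });
        }

        static void Main()
        {
            var p = new IncrementalHeuristic();
            p.pairs.Add(new BracePair { Open = 11, Close = 40 }); // pre-existing block

            p.OnOpeningBraceTyped(25);  // user types '{': flagged, 11..40 untouched
            Console.WriteLine($"unpaired: {p.unpairedOpens[0]}");

            p.OnClosingBraceTyped(33);  // matching '}' arrives: 25..33 becomes a pair
            Console.WriteLine($"pairs: {p.pairs.Count}");
        }
    }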

In the general case, what I wanted was for the parser to use all the information from previous states to understand the editing better, and to better interpret any version of the code. The simplest case is the one shown above: flagging the new brace as unpaired, instead of a pre-existing one. But the lead was promising, and I could envision more advanced use cases. Code is often incorrect while it is being modified, and the developer still needs all the help he can get. And I’m not only talking about simple non-matching braces: when we ported Commandos 2 from the PC to the PlayStation 2, optimizing memory usage took months, and for extended periods of time, full branches of the one-million-line C++ codebase didn’t even compile. Even during regular editing, the code probably spends more time in an incorrect state than in a correct one. Being able to apply refactorings and other complex operations while the code is partially wrong can be very useful.

It’s not only that it works, but that it’s the right thing to do. Enhancing the tools so they can better understand what is happening can only be a net win.

Uncharted Territory

(Or, once you know where you want to get to, how do you find the way there?)

As with any spec, there are different ways to implement it. Codekana could just try to fully reparse the module, not accept the new text fragments until the whole module parses correctly, and flag those braces as unpaired. This would work, but it’s an ugly hack: it would require a lot of redundant parsing, and it wouldn’t work in all cases. But mainly, it’s just not the right solution; it doesn’t scale to higher levels of usefulness.

I did have a look at standard research sources. I knew, because I’m familiar with them, that traditional books on parsing like The Dragon Book and other standard texts on compilers don’t even come close to covering this kind of scenario. What I was trying to do is a form of incremental parsing: reusing the previously existing parse to parse a newer version of the same source.

I had a look at what was available on incremental parsing, and found a few research initiatives, and even mentions of a few products. But it turned out none of them was focused on the kind of problem I was trying to solve. They were either trying to improve compile times, thus studying incremental analysis between correct versions of the code, or trying to study the formal mathematics of incremental parsing, which is a cool thing, but not very practical. Their approaches would have analyzed the above incorrect source in the same naïve, wrong way current tools do, only much faster. It’s not only important that the parser re-uses existing information, but how it re-uses it.

It was clear I had to design a completely different parsing engine; no existing techniques would be of help. It turned out that by pursuing a clear user-oriented goal, I had deviated from what is considered standard knowledge in parsing theory and practice. That was all it took to get into uncharted territory. The bad news is that opening new paths can require a lot of effort. The good news is that this is much, much more exciting than being the thousandth implementer of an algorithm from the 70s, and potentially much more rewarding.

So, how could this thing be implemented?

The key to designing and implementing any code is deciding what data needs to be stored and how, and what operations act on it. None of the usual parsing techniques would work for the above case, because none were designed to understand erroneous code.

There is actually a technique called “error productions”, which includes common erroneous code patterns in the grammar of a parser so that the compiler can report errors properly. But this technique does not understand the “ephemeral” nature of code as seen by Codekana in an editing session.

Wait, this was the key! All current techniques deal with quite stable code, while I wanted Codekana to understand how code evolves. Let’s see the parsing structures alongside each version of the code:

Sample Editing Session with Parse

Sample editing session with parse trees for the correct versions

With traditional tools, both the first and the last states of the code above have a representation as a “parse tree”, while the state in the middle doesn’t have one, because it’s incorrect.

The key realization was understanding that I had to extend the traditional parse tree to cover erroneous cases: a branch of the parse tree can contain an unmatched brace, that’s all. This never arises in traditional parsers, because they “experience” the code linearly. However, a dynamic parser experiences the non-linear timeline of editing operations, and thus a new, unpaired brace can appear, and should be inserted, anywhere in the tree. Until another brace comes later to match it, we’ll have to live with an unmatched brace:

Sample Parse Including Unmatched Brace

Unpaired brace, happily living in the parse tree

If a matching closing brace is later inserted below it, the two are paired up and converted into a proper block.

This treatment extends to other cases: if the closing brace of an existing block is deleted, the opening one is “downgraded” to unpaired status, and all the remaining nested or surrounding blocks remain properly paired. All common cases are understood and properly shown to the user.
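
A minimal sketch of what such an extended parse tree might look like (type and member names are mine, purely for illustration; the real structure is certainly richer):

    using System;
    using System.Collections.Generic;

    // An extended parse tree where an unmatched brace is a legitimate
    // node: it lives among its siblings until an edit upgrades it into
    // a block, or until a deletion downgrades a block back into one.
    abstract class Node { }

    class Block : Node          // a properly paired { ... }
    {
        public int Open, Close;
        public List<Node> Children = new List<Node>();
    }

    class UnpairedBrace : Node  // a lone '{' waiting for its partner
    {
        public int Position;
    }

    class ParseTreeDemo
    {
        static void Main()
        {
            // foo's body, with one nested block and one freshly typed '{'.
            var body = new Block { Open = 11, Close = 60 };
            body.Children.Add(new UnpairedBrace { Position = 25 });
            body.Children.Add(new Block { Open = 30, Close = 50 });

            // Deleting the nested block's closing brace "downgrades" it;
            // the rest of the tree is untouched.
            body.Children[1] = new UnpairedBrace { Position = 30 };

            foreach (var child in body.Children)
                Console.WriteLine(child is Block b
                    ? $"block {b.Open}..{b.Close}"
                    : $"unpaired brace at {((UnpairedBrace)child).Position}");
        }
    }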

The principle also applies to more complex situations: the parsing could understand partially-written declarations, function calls, expressions, or any other structure of the grammar. These partially-complete instances can live happily in the parse tree until they are complete, and can thus be properly treated.

I call this incremental approach “lead-based parsing”, because many of the parsing actions start from actual keywords and operators found in the middle of the code, instead of proceeding linearly from a given point. Indeed, since incremental parsing has to happen wherever a change happens, in the general case the parser always starts “in the middle of the code”.

Once the design was in place, there were many other intricacies in the implementation, but the actual quantum leap was realizing that a switch of perspective was needed to reach the design goal. And that switch led to a completely new technique in parsing, one of the most studied areas of computing, all in the pursuit of a better development experience.

Conclusion

(Or, what does this all mean to you?)

At the end of July 2007, I was able to release Codekana 1.0. It included the incremental parsing system and many other features. It was a small and useful tool, and it will hopefully become much more powerful in the future.

Obviously, Codekana is not “the future of parsing”. It’s just a little tool that can make the life of Visual Studio developers easier. But it has gone down a rarely-if-ever-visited path in the world of parsing, and it can offer a much better experience as a result. Actually, it had to go down that path in order to offer that improved experience.

I do believe that it’s this kind of thinking which will get us to the next generation of computing as a whole. Especially in computers, most things are not even thought about yet, let alone solved and done. In computers, it’s the early 1900s, baby.

Take-aways:

  • Never assume that everything is done in any given field.
  • When you have a vision, stick to it even if you don’t know how to do it.
  • Know what the state-of-the-art is, so that you can use all available tools and you don’t duplicate work.
  • Decide the goals around your vision, not around what you can easily do, or you won’t be able to innovate.
  • To solve the part not covered by existing techniques, you need to understand the essence of what you are trying to do. But beware: it can take a huge effort to tell what the essence is, and what is just clutter surrounding it.

Innovation is impossible unless you see what is beneath the surface; otherwise, you can only copy other people’s creations. But as you learn to see the underlying principles, see what you want to achieve, and stick to your vision, innovating becomes unavoidable.

What do I think is the real, actual state of the art in parsing? It’s this: there are lifetimes of research pending in parsing. Turing-award-winning algorithms and techniques to discover. Fortune-making products to be conceived and implemented. And this is just parsing, a small corner of computer science that has supposedly been studied to exhaustion. Don’t get me started on the less-explored corners.

Now, please, go, and innovate.


PS: And if you want to check out Codekana, you can start exploring here.

PPS: Thanks to Patrick McKenzie, Eric Sink, Ralf Huvendiek, OJ Reeves, Dennis Gurock, Tobias Gurock, Nick Hebb, Clay Nichols, Andy Brice, Doug Nebeker and Raul Ramos for their early feedback, which helped make this a much better article.

Codekana 1.1.9

Thursday, January 29th, 2009

I’ve just uploaded 1.1.9. This fixes constructor highlighting for the following C# syntax, which wasn’t properly recognized:

        public Command(Jobs Job, int Percentage)
            : this(Job)
        {
            this.Percentage = Percentage;
        }

Now “Command” gets properly marked. The “this” in “: this(Job)” was what confused Codekana’s parser.

You can download it via this link:

http://www.codekana.com/CodekanaVS-1.1.9.msi

You needn’t uninstall the previous version before installing this one; it will auto-upgrade your installation. Also, the main download link now downloads this version, so there’s no need to update if you downloaded 1.1 after today.

Codekana 1.1.8

Monday, August 4th, 2008

I uploaded version 1.1.8 yesterday. This version fixes a freeze that could happen when doing a “Find in Files” operation with Codekana’s search highlighting active: VS would freeze, and you’d need to kill it.

Here is the link to download it:

http://www.codekana.com/CodekanaVS-1.1.8.msi

You don’t need to uninstall the version you have installed; this will automatically upgrade the installation.

The bug happened very rarely, but funnily enough, it would happen more easily on a very tall display. I recently got myself a Dell 2408WFP 24″ display that can be set up vertically, for a glorious 125 lines of code in VS, and this helped me reproduce the problem as reported by a customer. Only afterwards did I find out he was using a widescreen monitor in vertical position too! I’m considering writing a story about this, as it’s interesting in its own right.

Codekana 1.1.7

Thursday, May 15th, 2008

I uploaded version 1.1.7 yesterday. This version fixes a freeze that would happen after searching for a regular expression that could produce a zero-width match, due to a bug in the search-results highlighting feature. Visual Studio would completely stop responding, and the only option was to kill it.

Here is the link to download it:

http://www.codekana.com/CodekanaVS-1.1.7.msi

This build will automatically update any previous installation; you don’t need to uninstall the version you have installed.

Web Site Revamp

Wednesday, April 30th, 2008

Unless you’re reading this in an RSS reader, you’ll notice that I’ve finally styled the blog to mostly match the rest of the codekana.com site. It still needs some work, but now that it is integrated with the rest, I’ve added a link to the blog on the top navigation bar. I also updated the copyright notice at the bottom to say “2008” (a bit late, I know).

Expect some more work on the web site over the next few weeks. Hopefully, a better structure for the pages will help convey the usefulness of Codekana.

Ah, and the “really big plans” for Codekana are reserved for later this year, but I’m planning a new release with new features over the next couple of months!

