planet

July 15, 2014

Ale Gadea

Month of June

July 15, 2014 01:43 AM UTC

Here goes a little summary of what I have been doing between late June (9~21) and early July (1~11).

First, the easy part: I have been documenting Darcs.misplacedPatches (previously named chooseOrder), D.P.W.Ordered and D.P.W.Sealed. One thing worth commenting on is that, because of the semantics of misplacedPatches, darcs optimize reorder cannot always clean a tag. For example, suppose we have a repository $r_1$ with the following patches:

$r_1$ $=$ $t_{1,0}$ $p_{1,0}$ $t_{1,1}$

here all tags are clean, but if we make another repository, say $r_2$, and pull from $r_1$ in the following way:

$\$$ darcs pull -a -p $p_{1,0}$ $r_1$ (we want to pull only the patch $p_{1,0}$; we assume the patch is named $p_{1,0}$ so that the -p option matches it)
$\$$ darcs pull -a $r_1$

so now we have,

$r_2$ $=$ $p_{1,0}$ $t_{1,0}$ $t_{1,1}$

and we see that $t_{1,0}$ is dirty. Running darcs optimize reorder does not reorder anything. What is going on is that, to decide what to reorder, misplacedPatches takes the first tag, in our case $t_{1,1}$, and "searches" for the patches it does not tag. But $p_{1,0}$ and $t_{1,0}$ are both tagged by $t_{1,1}$, so there is nothing to reorder even though $t_{1,0}$ is dirty. Therefore there is no way to clean $t_{1,0}$: because misplacedPatches always takes the first tag, if a tag covers one or more dirty tags, those tags will never become clean.

"Second", using the implementation of "reorder" one can get almost for free the option --reorder for the commands pull, apply and rebase pull. The behavior for the case of pull (for the others commands is the same basic idea) is that our local patches remain on top after a pull from a remote repository, e.i. suppose we have the followings $l$(ocal) and $r$(emote) repositories,

$l$ $=$ $p_1$ $p_2$ $\ldots$ $p_n$ $lp_{n+1}$ $\ldots$ $lp_m$

$r$ $=$ $p_1$ $p_2$ $\ldots$ $p_n$ $rp_{n+1}$ $\ldots$ $rp_k$

where $lp$ are the local patches that don't belong to $r$, and vice versa for $rp$. Running darcs pull leaves $l$ as follows,

$l$ $=$ $p_1$ $p_2$ $\ldots$ $p_n$ $lp_{n+1}$ $\ldots$ $lp_m$ $rp_{n+1}$ $\ldots$ $rp_k$

whereas running darcs pull --reorder leaves $l$ as,

$l$ $=$ $p_1$ $p_2$ $\ldots$ $p_n$ $rp_{n+1}$ $\ldots$ $rp_k$ $lp_{n+1}$ $\ldots$ $lp_m$

making it easier to send the $lp$ patches later.
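
As a usage sketch (assuming the flag keeps the spelling --reorder used above), pulling into the local repository $l$ would then look like:

$ cd l
$ darcs pull -a --reorder ../r   # the rp patches are pulled in, the local-only lp patches stay on top
$ darcs changes --last 3         # the most recent entries should now be the lp patches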

"Third", beginning a new task, implement option minimize-context for command darcs send. Still no much to comment, I have almost finished implementing the option but with some doubts, I hope that for the end of the week have a more "prettier" implementation as well as a better understanding.

July 04, 2014

the Patch-Tag blog

Patch-tag is shutting down on August 4 2014. Please migrate repos to hub.darcs.net.

July 04, 2014 07:24 PM UTC

Patch-tag users:

I have made the decision to shut down patch tag.

I’ve taken this step because I have stopped developing on patch tag, the site has a material time and money cost, and the technical aspects that made it a valuable learning experience have decreased to the point that I have hit diminishing returns.

The suggested continuity path is to move repos to Simon Michael's excellent hub.darcs.net.

To end on a positive note, I would like to say: no regrets! Creating patch tag was a definite high point of my career, opened valuable doors, and engendered even more valuable partnerships and collaborations. To my users and everyone who has helped, you are awesome and it was a lot of fun seeing the repos come online.

I may write a more in depth post mortem at a later time, but for now I just wanted to make a public statement and nudge remaining users to take appropriate action.

If there is anybody that would like to take over patch tag operations to keep the site going, I am open to handing over the reins so don’t be shy. I floated this offer among some private channels in the darcs community a while back, and the response then was… not overwhelming. But maybe the public announcement will bring in some new blood.

Thanks for using patch tag.

Happy tagging,

Thomas Hartman


June 25, 2014

Darcs News

darcs news #104

June 25, 2014 04:59 AM UTC

News and discussions

  1. Google Summer of Code 2013 has begun! BSRK and José will post updates on their blogs:

Issues resolved (8)

issue2163 Radoslav Dorcik
issue2227 Ganesh Sittampalam
issue2248 Ganesh Sittampalam
issue2250 BSRK Aditya
issue2311 Sebastian Fischer
issue2312 Sebastian Fischer
issue2320 Jose Luis Neder
issue2321 Jose Luis Neder

Patches applied (20)

See darcs wiki entry for details.

darcs news #105

June 25, 2014 04:58 AM UTC

News and discussions

  1. This year's Google Summer of Code projects brought a lot of improvements to darcs and its ecosystem!
  2. Gian Piero Carrubba asked why adjacent hunks could not commute:
  3. We listed the changes that occurred between version 2.8.4 and the current development branch on a 2.10 release page:

Issues resolved (8)

issue346 Jose Luis Neder
issue1828 Guillaume Hoffmann
issue2181 Guillaume Hoffmann
issue2309 Owen Stephens
issue2313 Jose Luis Neder
issue2334 Guillaume Hoffmann
issue2343 Jose Luis Neder
issue2347 Guillaume Hoffmann

Patches applied (39)

See darcs wiki entry for details.

Darcs News #106

June 25, 2014 04:58 AM UTC

News and discussions

  1. Darcs is participating once again in the Google Summer of Code, through the umbrella organization Haskell.org. The deadline for student applications is Friday the 21st:
  2. It is now possible to donate stock to darcs through the Software Freedom Conservancy organization. Donations by Paypal, Flattr, checks and wire transfer are still possible:
  3. Dan Licata wrote a presentation about Darcs as a higher inductive type:
  4. Darcs now directly provides import and export commands for Git. This code was adapted from Petr Rockai's darcs-fastconvert, with some changes by Owen Stephens from his Summer of Code project "darcs-bridge":

Issues resolved (6)

issue642 Jose Luis Neder
issue2209 Jose Luis Neder
issue2319 Guillaume Hoffmann
issue2332 Guillaume Hoffmann
issue2335 Guillaume Hoffmann
issue2348 Ryan

Patches applied (34)

See darcs wiki entry for details.

Darcs News #107

June 25, 2014 04:57 AM UTC

News and discussions

  1. Darcs has received two grants from the Google Summer of Code program, as part of the umbrella organization Haskell.org. Alejandro Gadea will work on history reordering:
  2. Marcio Diaz will work on the cache system:
  3. Repository cloning to remote ssh hosts has been available for years as darcs put. This feature now has a more efficient implementation:

Issues resolved (11)

issue851 Dan Frumin
issue1066 Guillaume Hoffmann
issue1268 Guillaume Hoffmann
issue1416 Ale Gadea
issue1987 Marcio Diaz
issue2263 Ale Gadea
issue2345 Dan Frumin
issue2357 Dan Frumin
issue2365 Guillaume Hoffmann
issue2367 Guillaume Hoffmann
issue2379 Guillaume Hoffmann

Patches applied (41)

See darcs wiki entry for details.

Darcs News #108

June 25, 2014 04:57 AM UTC

News and discussions

  1. We have a few updates from the Google Summer of Code projects. Alejandro Gadea about history reordering:
  2. Marcio Diaz about the cache system:
  3. Incremental fast-export is now provided to ease maintenance of git mirrors:

Issues resolved (8)

issue2244 Ale Gadea
issue2314 Benjamin Franksen
issue2361 Ale Gadea
issue2364 Sergei Trofimovich
issue2364 Sergei Trofimovich
issue2388 Owen Stephens
issue2394 Guillaume Hoffmann
issue2396 Guillaume Hoffmann

Patches applied (39)

See darcs wiki entry for details.

June 12, 2014

Ale Gadea

Third Week (02-06 june)

June 12, 2014 04:58 PM UTC

Well, well... Now, with the solution already implemented, here are a couple of timing tests that show the improvement.
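
(The figures below were presumably collected by wrapping the reorder command in the standard time utility, roughly like this; the repository path is just illustrative.)

$ cd repo-issue2361
$ time darcs optimize --reorder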

For the repository of issue2361:

Before patch1169
"let it run for 2 hours and it did not finish"

After patch1169
real    0m5.929s
user    0m5.683s
sys     0m0.260s

For the repository generated by forever.sh, which in summary has ~12600 patches and an unrevert bundle, doing the reorder implies moving ~1100 patches forward past ~11500 patches.

Before patch1169
(Interrupted!)
real    73m9.894s
user    71m28.256s
sys     1m11.439s

After patch1169
real    2m23.405s
user    2m17.347s
sys     0m6.030s

The repository generated by bigRepo.sh has ~600 patches, with only one tag and a very small unrevert bundle.

Before patch1169
real        0m34.049s
user        0m33.386s
sys         0m0.665s

After patch1169
real        0m1.053s
user        0m0.960s
sys         0m0.152s

One last repository, generated by bigUnrevert.sh, has 13 patches and a really big unrevert bundle (~10MB).

Before patch1169
real    0m1.304s
user    0m0.499s
sys     0m0.090s

After patch1169
real    0m0.075s
user    0m0.016s
sys     0m0.011s

The repository with more examples is here: ExamplesRepos.

June 05, 2014

Ale Gadea

Second Week (26-30 may)

June 05, 2014 06:47 PM UTC

Luckily, this week Guillaume and I found a "solution" for issue2361. But before entering into the details, let's review how the command darcs optimize --reorder goes about reordering the patches.

So, suppose we have the following repositories where, reading from left to right, we go from the first patch to the last one. With $p_{i,j}$ we denote the $i$-th patch belonging to the $j$-th repository, and when we want to specify that a patch $p_{i,j}$ is a tag we write $t_{i,j}$.

$r_1$ $=$ $p_{1,1}$ $p_{2,1}$ $\ldots$ $p_{n,1}$ $p_{n+1,1}$ $\ldots$ $p_{m,1}$

$r_2$ $=$ $p_{1,1}$ $p_{2,1}$ $\ldots$ $p_{n,1}$ $p_{1,2}$ $\ldots$ $p_{k,2}$ $t_{1,2}$ $p_{k+1,2}$ $\ldots$ $p_{l,2}$

where the shared prefix $p_{1,1}$ $\ldots$ $p_{n,1}$ is the state at which $r_2$ was cloned from $r_1$, and the rest is how each repository evolved afterwards. Now, suppose we merge $r_1$ and $r_2$ into $r_1$ by making a bundle of the patches of $r_2$ and applying it in $r_1$. Thus, after the merge we have that

$r_1$ $=$ $p_{1,1}$ $p_{2,1}$ $\ldots$ $p_{n,1}$ $p_{n+1,1}$ $\ldots$ $p_{m,1}$ $p_{1,2}$ $\ldots$ $p_{k,2}$ $t_{1,2}$ $p_{k+1,2}$ $\ldots$ $p_{l,2}$

and we find the situation where the tag $t_{1,2}$ is dirty, because the patches $p_{n+1,1}$ $\ldots$ $p_{m,1}$, which it does not tag, sit in the middle. Now we are in a position to find out how darcs reorders the patches.
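
Before diving into the internals, here is roughly how the bundle-based merge described above could be reproduced from the shell (a sketch, with the repository names from the text):

$ cd r2
$ darcs send -a -o ../r2.dpatch ../r1   # bundle the patches that r1 does not have
$ cd ../r1
$ darcs apply -a ../r2.dpatch           # after this, t_{1,2} is dirty in r1
$ darcs optimize --reorder              # the operation the rest of this post dissects
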
So, the first task is to select the first tag when reading $r_1$ in reverse; suppose $t_{1,2}$ is that tag (i.e. $p_{k+1,2}$ $\ldots$ $p_{l,2}$ are not tags), and to split the set of patches (the repository) into

$ps_{t_{1,2}}$ $=$ $p_{1,1}$ $p_{2,1}$ $\ldots$ $p_{n,1}$ $p_{1,2}$ $\ldots$ $p_{k,2}$ $t_{1,2}$

and the rest of the patch set,

$rest$ $=$ $p_{n+1,1}$ $\ldots$ $p_{m,1}$ $p_{k+1,2}$ $\ldots$ $p_{l,2}$

this is done by splitOnTag, which I don't totally understand yet, so for the moment... it simply does the above :) The part that interests us now is $rest$: we want to delete all the patches of $rest$ from $r_1$ and then add them again, so that they end up on the right. This job is done by tentativelyReplacePatches, which first calls tentativelyRemovePatches and then calls tentativelyAddPatches.

So, tentativelyRemovePatches applied to $r_1$ and $rest$ gives,

$r_{1}'$ $=$ $p_{1,1}$ $p_{2,1}$ $\ldots$ $p_{n,1}$ $p_{1,2}$ $\ldots$ $p_{k,2}$ $t_{1,2}$

and tentativelyAddPatches applied to $r_{1}'$ and $rest$ gives,

$r_{1}''$ $=$ $p_{1,1}$ $p_{2,1}$ $\ldots$ $p_{n,1}$ $p_{1,2}$ $\ldots$ $p_{k,2}$ $t_{1,2}$ $p_{n+1,1}$ $\ldots$ $p_{m,1}$  $p_{k+1,2}$ $\ldots$ $p_{l,2}$


leaving $t_{1,2}$ clean.

Well, all of this was needed to understand the "solution" for the issue; we are almost there, but first let's look at the function tentativelyRemovePatches. It removes patches with one special care: when one runs darcs revert, a special file called unrevert is created in _darcs/patches, which is used by darcs unrevert in case one makes a mistake with darcs revert. One important difference is that, unlike all the other files in _darcs/patches, unrevert is not a patch but a bundle, containing a patch and a context. This context makes it possible to know whether the patch is applicable. So when one removes a patch (running for example obliterate, unrecord or amend), that patch has to be removed from the unrevert bundle (the bundle in the file _darcs/patches/unrevert). It is not always possible to adjust the unrevert bundle, in which case the operation continues only if the user agrees to delete it.
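
A quick way to see the unrevert bundle appear and be used (a minimal sketch inside some repository; "f" stands for any tracked file with an unrecorded change):

$ echo scratch >> f
$ darcs revert -a                 # the discarded change is saved away
$ ls _darcs/patches/unrevert      # the bundle: a patch together with its context
_darcs/patches/unrevert
$ darcs unrevert                  # brings the reverted change back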

But now a question emerges: is it necessary to adjust the unrevert bundle in the case of reorder? The answer is no, because we do not delete any patch of $r_1$, so we can still apply the unrevert bundle in $r_{1}''$.

So, finally! We find out that for reorder we need a special kind of removal, one which does not try to update the unrevert bundle. This ends up being the "solution" for the issue, since reorder was blocking inside that function. But! Beyond the fact that this solves the issue, something weird is happening, which is the reason for the double quotes around "solution" :)

This is more or less the progress so far. The tasks ahead are documenting various parts of the code and implementing the special case for the function tentativelyRemovePatches. Along the way I will probably understand more about some of the functions I mentioned before, so I will likely add more info and rectify whatever is needed.

June 03, 2014

Ale Gadea

Google Summer of Code 2014 - Darcs

June 03, 2014 06:46 PM UTC

Hi hi all!

I have been accepted into GSoC 2014 :). As part of the work I'll be writing about my progress. The original plan is to have a summary per week (or at least I hope so, hehe).

I have already been reading some of the darcs code and fixing some issues:

Issue 2263 ~ Patch 1126
Issue 1416 ~ Patch 1135
Issue 2244 ~ Patch 1147 (needs-screening) (not any more $\ddot\smile$)

The details about the project are in History Reordering Performance and Features. Also, some issues related to the project are:

Issue 2361
Issue 2044

Cheers!

First Week (19-23 may)

June 03, 2014 06:42 PM UTC

Sadly, a slow first week: I lost Monday to problems with my notebook, for which I had to reinstall GHC, cabal, all the libraries, etc... but! in the end this helped :)

The list of tasks for the week includes:

1. Compile and run darcs with profiling flags
2. Write scripts to generate dirty-tagged big repositories
3. Check memory usage with hp2any for the command optimize --reorder for the
generated repositories and repo-issue2361
4. Check performance difference with and without patch-index
5. Document reorder implementation on wiki
6. Actually debug/optimize reorder of issue2361 (Stretch goal)

1. Compile and run darcs with profiling flags

This seemed pretty easy at first, but turned out to be somewhat annoying because one has to install all the libraries with profiling enabled. So, a mini step-by-step of my installation of darcs with profiling flags (I'm using Ubuntu 14.04, ghc-7.6.3 and cabal-install-1.20.0.2):

- Install the ghc-prof package, in my case with sudo apt-get install ghc-prof
- Install the dependencies of darcs with library profiling enabled, either by doing:
    - $ cabal install LIB --enable-library-profiling ( for each library :) )
    - or by setting library-profiling: True in ~/.cabal/config
- Finally, install darcs with --enable-library-profiling and --enable-executable-profiling (the whole sequence is sketched below)
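
Put together, the whole sequence looks roughly like this (a sketch for the versions mentioned above; the last line shows how the profiled binary can then be run, which may additionally require building darcs with -rtsopts):

$ sudo apt-get install ghc-prof
$ cabal install LIB --enable-library-profiling             # once per dependency...
$ # ...or set "library-profiling: True" in ~/.cabal/config instead
$ cabal install darcs --enable-library-profiling --enable-executable-profiling
$ darcs optimize --reorder +RTS -p -RTS                    # writes a darcs.prof time profile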

2. Write scripts to generate dirty-tagged big repositories

About this there is not much to say. I wrote some small libraries to make the scripts that generate the repositories more straightforward, and I wrote some examples, but I am still in search of interesting ones. Along the week I will probably add more examples, hopefully interesting ones.
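
A generator along these lines is the kind of thing I have in mind; a minimal sketch (repository, file and patch names are made up, this is not one of the actual scripts):

#!/bin/sh
# build two repositories whose merge leaves a dirty tag, then time the reorder
export DARCS_EMAIL=tester
mkdir r1 && cd r1 && darcs init
for i in $(seq 1 3); do echo "base $i" >> base; darcs record -l -a -m "base $i"; done
cd .. && darcs get r1 r2
cd r1
for i in $(seq 1 200); do echo "r1 $i" >> a; darcs record -l -a -m "r1 patch $i"; done
cd ../r2
for i in $(seq 1 200); do echo "r2 $i" >> b; darcs record -l -a -m "r2 patch $i"; done
darcs tag r2-tag
cd ../r1
darcs pull -a ../r2              # r2-tag is now dirty in r1
time darcs optimize --reorder    # the operation we want to profile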

3, 4 and 5 all together and mixed

Now, when I finally started to generate the example repositories and play with hp2ps to check different things, I started to think about other things and ended up studying the implementation of the command optimize --reorder. In particular, I started to write a version which prints some info during the ordering of patches, but for now it is a very dirty implementation.

April 27, 2014

Marcio Diaz

GSoC Progress Report #1: Complete Repository Garbage Collection

April 27, 2014 05:06 AM UTC

In my first week I worked on completing the garbage collection for repositories.

Darcs stores all the information it needs under the _darcs directory. In this part of the project we are only interested in the files stored in three directories: _darcs/pristine.hashed, _darcs/patches and _darcs/inventories.

While working on a project under version control, these directories grow in size.
Every time we record a new patch:

So, why do we keep these files if we no longer need them? Well, that’s because darcs wants to be fast and does not delete these files over time. It’s also because, if the repository is public and someone is cloning it, you don’t want files disappearing in the process.

Darcs, using "darcs optimize" command, only knows how to clean up the _darcs/pristine.hashed directory. Until now, the only way to clean the other two directories was doing a "darcs get". With the changes introduced, now "darcs optimize" also clean these directories.

Algorithms:

The implemented algorithm was pretty straightforward, in pseudo-code:

- inventory = _darcs/hashed_inventory
- while (inventory) 
    - useful_inventories += inventory
    - inventory = next_inventory(inventory) 
- remove files not in useful_inventories.

- inventory = _darcs/hashed_inventory
- while (inventory) 
    - useful_patches += get_patches(inventory)    
    - inventory = next_inventory(inventory)
- remove files not in useful_patches.

We can see that we traverse the inventory list twice, once for the inventories and once for the patches. Although this is not optimal, I think it is more modular, since we now have a function that gets the list of patches.


Commands affected:

- darcs optimize

Use cases:

It is useful when you need to free space on your hard disk.
For example (a rough transcript of this sequence follows the list):
- Record a new patch.
- Unrecord the new patch.
- Run optimize for garbage collecting the unused files corresponding to the unrecorded patch. Details in: http://pastebin.com/vYHiYV0F
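
A rough transcript of that sequence (the file and patch names are made up; this assumes a repository where the new optimize behaviour is available):

$ echo scratch >> notes
$ darcs record -l -a -A me -m "temporary change"
$ darcs unrecord -a -p "temporary change"   # the patch file is now unreferenced under _darcs/patches
$ darcs optimize                            # garbage-collects the unreferenced files
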
You can find more use cases in the regression test script:
http://hub.darcs.net/darcs/darcs-screened/browse/tests/issue1987.sh.

Issues solved:

http://bugs.darcs.net/issue1987.

Patches created:

http://bugs.darcs.net/patch1134.

April 26, 2014

Marcio Diaz

GSoC project accepted

April 26, 2014 09:36 PM UTC

I was accepted for the Google Summer of Code 2014. I'll be working with Haskell.org, and my project will focus on improvements to the Darcs version control system.

The project consists of several parts:

  1. Complete garbage collection for repositories.
  2. Bucketed global cache.
  3. Garbage collection of global cache.
  4. Investigate and implement darcs undo command.
  5. Investigate and implement darcs undelete command.
Here is a detailed description of my project proposal: http://darcs.net/GSoC/2014-Hashed-Files-And-Cache.

I'll try to give weekly updates of how my work is going, and let you know about the problems and solutions that I find in my way.

Thanks Haskell.org, thanks Darcs, and last but not least thanks Google for giving us this awesome opportunity.

November 03, 2013

Simon Michael

darcsum 1.3

November 03, 2013 07:38 PM UTC

darcsum was hanging again, so I made some updates:

And since I came this far, I’ll tag and announce darcsum 1.3. Hurrah!

This release includes many fixes from Dave Love and one from Simon Marlow. Here are the release notes.

Site and ELPA package updates will follow asap. All help is welcome.

September 26, 2013

Simon Michael

darcsden/darcs hub GSOC complete

September 26, 2013 11:48 AM UTC

Aditya BSRK’s darcsden-improvement GSOC has concluded, and I’ve recently merged almost all of the pending work and deployed it on darcs hub.

You can always see the recently landed changes here, but let me describe the latest features a little more:

File history - when you browse a file, there’s a new “file changes” button which shows just the changes affecting that file.

File annotate - there’s also a new “annotate” button, providing the standard view showing which commit last touched each line of the file. (also known as the blame/praise feature). It needs some CSS polish but I’m glad that the basic side-by-side layout is there.

More reliable highlighting while editing - the file editor was failing to highlight many common programming languages - this should be working better now. (Note highlighting while viewing and highlighting while editing are independent and probably use different colour schemes, this is a known open wishlist item.)

Repository compare - when viewing a repo’s branches, there’s a new “compare” button which lets you compare (and merge from) any two public repos on darcs hub, showing the unique patches on each side.

Cosmetic fixes - various minor layout and rendering issues were fixed. One point of discussion was whether to use the two-sided layout on the repo branches page as well. Since there wasn’t time to make that really usable I vetoed it in favour of the less confusing one-sided layout. I think showing both sides works well on the compare page though.

Patch bundle support - the last big feature of the GSOC was patch bundles. This is an alternative to the fork repo/request merge workflow, intended to be more lightweight and easy for casual contributors. There are two parts. First, darcs hub issue trackers can now store darcs patch bundle files (one per issue I think). This means patches can be uploaded to an issue, much like the current Darcs issue/patch tracker. But you can also browse and merge patches directly from a bundle, just as you can from another repo.

The second part (not yet deployed) is support for a previously unused feature built in to the darcs send command, which can post patches directly to a url instead of emailing them. The idea (championed by Aditya and Ganesh) is to make it very easy for someone to darcs send patches upstream to the project’s issue tracker, without having to fork a repo, or even create an account on darcs hub. As you can imagine, some safeguards are important to avoid becoming a spam vector or long-term maintenance headache, but the required change(s) are small and I hope we’ll have this piece working soon. It should be interesting to have both workflows available and see which works where.

I won’t recap the older new features, except to say that pack support is in need of more testing. If you ever find darcs get to be slow, perhaps you’d like to help test and troubleshoot packs, since they can potentially make this much faster. Also there are a number of low-hanging UI improvements we can make, and more (relatively easy) bugs keep landing in the darcs hub/darcsden issue tracker. It’s a great time to hack on darcs hub/darcsden and every day make it a little more fun and efficient to work with.

I really appreciate Aditya’s work, and that of his mentor, Ganesh Sittampalam. We did a lot of code review which was not always easy across a large time zone gap, but I think the results were good. Congratulations Aditya on completing the GSOC and delivering many useful features, which we can put to good use immediately. Thanks!

September 20, 2013

Jose Luis Neder

Automatic detection of replaces for Darcs - Part 1

September 20, 2013 03:25 PM UTC

In this post I show some examples and use cases of the "--look-for-replaces" flag for the whatsnew, record, and amend-record commands in Darcs. When used, this flag provides automatic detection of (possible) replaces, even when the modified files show more differences than only the replaces, and it even shows possible "forced" replaces.
The simplest case is when you make a replace in your editor of choice, don't make any other change to the file, and then, after checking that all is ok, remember that you could have used a replace patch.

file before:
line1 foo
line2 foo
line3 foo
file after:
line1 bar
line2 bar
line3 bar
Then, instead of:
> darcs revert -a file
Reverting changes in "file":

Finished reverting.
> darcs replace foo bar file
> darcs record -m "replace foo bar"
replace ./file [A-Za-z_0-9] foo bar
Shall I record this change? (1/1) [ynW...], or ? for more options: y
Do you want to record these changes? [Yglqk...], or ? for more options: y
Finished recording patch 'replace foo bar'
You could do:
> darcs record --look-for-replaces -m "replace foo bar"
replace ./file [A-Za-z_0-9] foo bar
Shall I record this change? (1/1) [ynW...], or ? for more options: y
Do you want to record these changes? [Yglqk...], or ? for more options: y
Finished recording patch 'replace foo bar'
But it doesn't have to be a full replace. For instance, suppose you don't want to change a couple of the occurrences:
file before:
line1 foo
line2 foo
line3 foo
line4 foo
file after:
line1 bar
line2 bar
line3 bar
line4 foo
Then, instead of:
> darcs whatsnew
hunk ./file 1
-line1 foo
-line2 foo
-line3 foo
+line1 bar
+line2 bar
+line3 bar
With the new flag you could record this:
> darcs whatsnew --look-for-replaces
replace ./file [A-Za-z_0-9] foo bar
hunk ./file 4
-line4 bar
+line4 foo
Say you replace a word with another word that was already in the file. Normally this would mean that you should use "darcs replace --force". The look-for-replaces flag always "forces" the replaces, so if you try this, the changes that make the replace reversible will be shown before the replace patch:
file before:
line1 foo
line2 foo
line3 foo
line4 bar
file after:
line1 bar
line2 bar
line3 bar
line4 bar
With the new flag you will see the same patches as if you had done a "darcs replace --force foo bar file":
> darcs whatsnew --look-for-replaces
hunk ./file 4
-line4 bar
+line4 foo
replace ./file [A-Za-z_0-9] foo bar
Given certain limitations you could have any number of replaces detected, like this:
file before:
foo foo2 foo3
fee fee2 fee3
file after:
bar bar2 bar3
bor bor2 bor3
All the replaces are shown below:
> darcs whatsnew --look-for-replaces
replace ./file [A-Za-z_0-9] fee bor
replace ./file [A-Za-z_0-9] fee2 bor2
replace ./file [A-Za-z_0-9] fee3 bor3
replace ./file [A-Za-z_0-9] foo bar
replace ./file [A-Za-z_0-9] foo2 bar2
replace ./file [A-Za-z_0-9] foo3 bar3
If you want to know more about the limitations of this functionality, check Automatic detection of replaces for Darcs - Part 2.

Automatic detection of replaces for Darcs - Part 2

September 20, 2013 09:08 AM UTC

In the last weeks I was implementing the "--look-for-replaces" flag for the whatsnew, record, and amend-record commands in Darcs. When used, this flag provides automatic detection of (possible) replaces, even when the modified files show more differences than only the replaces, given they meet the following prerequisites:
1. For a given "word" and a given file, there is no need for all the instances to be replaced, but there must be only one possible replace suggestion, i.e.:

this is ok:
file before:
foo
foo
foo
file after:
foo
bar
bar
this is not detected:
file before:
foo
foo
foo
file after:
foo
bar
bar2
2. The replace must happen in lines that have the same number of words in the recorded and the working state, otherwise it will not be detected.
this is ok:
file before:
foo
foo
foo
file after:
foo roo
bar fee
bar
this is not detected (I don't know which replace one would want to detect here anyway):
file before:
figaro foo
figaro foo
figaro foo
file after:
figaro foo
figaro bar bee
figaro foo bar
3. There must be at least one hunk with the same number of lines on the - and + sides that contains the replace.
this is not detected:
file before:
line1 foo
line2 foo
line3 foo
file after:
line1 bar
line2or3 bar
It would not detect this replace, even though it is a "perfect" replace, because the hunk does not have the same number of lines, and it is not trivial to tell which line is the one "modified" and which is the one "deleted".

For more details about the implementation you can look at the look-for-replaces wiki page.

Automatic detection of file renames for Darcs - Part 2

September 20, 2013 09:07 AM UTC

In the last few weeks I was refining the automatic detection of file renames, adding support for Windows and support for more complicated renames.

Now if you like you can consult the inode information saved in the index at any time with "darcs show index":
⮁ darcs init
⮁ mkdir testdir
⮁ touch testfile
⮁ darcs record -al -m "test files"
Finished recording patch 'test files'
⮁ ls -i1d . testdir testfile
2285722 .
2326707 testdir
2238437 testfile

⮁ darcs show index
07ec6ccf873cf215ac0789a420f154ba9218b7ca5c4fce432584edab49766a7c 2285722 ./
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2326707 testdir/
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2238437 testfile
Now, with the new dependency algorithm, you can make more complicated renames, like exchanges of filenames and folder moves. The algorithm doesn't handle exchanging filenames inside a folder that has itself been renamed by exchanging names; anything else is handled fine.
For example:
⮁ ls -1pC
_darcs/  dir/  dir2/  dir3/  foo  foo2  foo3  foo4  foo5
⮁ mv foo dir3
⮁ mv foo2 dir
⮁ mv foo3 dir2
⮁ mv foo4 foo4.tmp
⮁ mv foo5 foo4
⮁ mv foo4.tmp foo5
⮁ mv dir3 dir
⮁ mv dir dir2/dir2
⮁ mv dir2 dir
⮁ darcs whatsnew --look-for-moves
move ./dir ./dir2/dir2
move ./dir2 ./dir
move ./dir3 ./dir/dir2/dir3
move ./foo ./dir/dir2/dir3/foo3
move ./foo2 ./dir/dir2/foo2
move ./foo3 ./dir/foo3
move ./foo4 ./foo4.tmp~
move ./foo5 ./foo4
move ./foo4.tmp~ ./foo5
The moves shown by "darcs whatsnew --look-for-moves" are not exactly the ones made but yield the same final result.

August 14, 2013

Jose Luis Neder

Automatic detection of file renames for Darcs

August 14, 2013 04:29 AM UTC

In the last few weeks I was implementing automatic detection of file renames, adding a "--look-for-moves" flag to the amend-record, record, and whatsnew commands.

In darcs there are 3 states: the recorded state, the pending state, and the working state.


If a file rename is not marked in the pending state, darcs loses track of the file and can't know where it is, and then `darcs whatsnew` and `darcs record` will report the file as deleted.
To detect such a file rename I chose to use the inode info in the filesystem to check for equality between different filenames in the recorded and working states of the repo. For those who don't know, the inode is an index number assigned by the file system to identify a specific piece of file data. The file name is linked to the data by this number, and it is used for directories as well. You can consult this number with "ls -i".
⮁ mkdir testdir
⮁ touch testfile
⮁ ln testfile testfile.hardlink
⮁ ln -s testfile testfile.symboliclink
⮁ ls -i1
10567718 testdir
10485776 testfile
10485776 testfile.hardlink
10485767 testfile.symboliclink 
You can see that the hardlink shares the same number with the test file; this is because a file is essentially a hardlink to the file data, and when you make a new hardlink you are sharing the same file data, and so the same inode number.
To have an old inode-to-filename mapping, there must be a record of the files' inodes somewhere, so I added the inode info to the hashed-storage index in _darcs/index. The index saves the latest info about the recorded plus pending state, more or less, so it is a perfect fit for holding this info.
Then, comparing the RecordedAndPending tree (from the index) with the Working tree, I get the file changes as a list of pairs mapping between the two states. With this list I resolve dependencies between the different moves, making temporary names if necessary and generating an FL of move patches to merge with the changes between the pending and working states.
These patches are shown by whatsnew, or are selected with record/amend-record to be recorded in the repo.
There is a little more needed to make this happen, but that's the core idea of the implementation.
The algorithm doesn't care whether the files are modified or not, because it doesn't look at the content of the files, so it is very robust in that sense.
With this implementation you can do any move directly with "mv", and it is very lightweight and fast in detecting moves, so it is likely a good decision to make "--look-for-moves" a default flag. You can do things like this:
⮁ darcs init
Repository initialized.
⮁ touch foo
⮁ darcs record -a -m add_file_foo -A x --look-for-adds
Finished recording patch 'add_file_foo'
⮁ mv foo foo2
⮁ darcs whatsnew --look-for-moves
move ./foo ./foo2
This doesn't work on Windows yet, because fileID (the function in unix-compat that gets the inode number) is lacking an implementation on Windows. I know the Windows API has GetFileInformationByHandle (it returns a BY_HANDLE_FILE_INFORMATION structure that contains the file index number[1]), so it shouldn't be too hard to add an implementation with some boilerplate code to adapt the interface.
More complicated moves should work, and some do, but I was having problems with the dependency-resolving algorithm implementation. I made some mistakes in the first implementation and have been dragging them along since then. I'm confident I know what the error is, so I will fix it soon.
UPDATE: I'm testing a Windows implementation with the Win32 Haskell library on a virtual machine.

August 09, 2013

Simon Michael

darcs hub, hledger, game dev

August 09, 2013 10:01 AM UTC

Hello blog. Since last time I’ve been doing plenty of stuff, but not telling you about it. Let’s do a bullet list and move on..

darcsden/darcs hub

hledger

FunGEn & game dev

A sudden burst of activity here.

July 24, 2013

Simon Michael

darcs hub repo stats, hledger balance sheet

July 24, 2013 02:50 AM UTC

Recent activity:

I fixed another clumsy query on darcs hub, making the all repos page faster. Experimented with user and repo counts on the front page. I like it, but haven’t deployed to production yet. It costs about a quarter of a second in page load time (one 50ms couch query to fetch all repos, plus my probably-suboptimal filtering and sorting).

I’ve finally learned how many of those names on the front page have (public) repos behind them (144 out of 319), and how many private repos there are (125, higher than expected!).

Thinking about what is really most useful to have on the front page. Keep listing everything ? Just top 5 in various categories ? Ideas welcome.

Did a bunch of bookkeeping today, which inspired my first hledger commit in a while. I found the balancesheet command (abbreviation: bs) highly useful for a quick snapshot of assets and liabilities to various depths (add --depth N). The Equity section was just a distraction though, and I think it will be to most hledger users for the time being, so I removed it.

July 23, 2013

Simon Michael

hub hacking

July 23, 2013 12:30 AM UTC

More darcs hub activity, including some actual app development (yay):

Added news links to the front page.

Cleaned up hub’s docs repo and updated the list of blockers on the roadmap.

Updated/closed a number of issues, including the app-restarting #58, thanks to a fast highlighting-kate fix by John McFarlane.

Tested and configured the issue-closing commit posthook in the darcsden trunk repo. Commits pushed/merged there whose message contains the regex (closes #([0-9]+)|resolves #([0-9]+)|fixes #([0-9]+)) will now close the specified issue, with luck.
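
So a recorded message like the following should, once pushed or merged there, close the referenced issue automatically (the issue number is made up for illustration):

$ darcs record -a -m "tidy up the patch page layout (fixes #123)"
$ darcs push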

Consolidated a number of modules to help with code navigation, to be pushed soon.

Improved the redirect destination when deleting or forking repos or creating/commenting/closing issues.

Fixed a silly whitespace issue when viewing a patch, where the author name and date run together. I’m still confused about the specific code that generates this - the code I expect uses tables but firebug shows divs. A mystery for another day..

July 22, 2013

Simon Michael

hub speedups

July 22, 2013 12:30 AM UTC

More darcs hub hacking today.

July 21, 2013

Simon Michael

darcsden 1.1, darcs hub news

July 21, 2013 03:00 PM UTC

I’ve been hacking (mostly on darcsden/hub) but not blogging recently. Must get back to the old 45-15 minute routine.


darcsden 1.1 released

darcsden 1.1 is now available on hackage! This is the updated version of darcsden which runs hub.darcs.net, so these changes are also relevant to that site’s users. (More darcs hub news below.)

darcsden is a web application for browsing and managing darcs repositories, issues, and users, plus a basic SSH server which lets users push changes without a system login. It is released under the BSD license. You can use it:

http://hackage.haskell.org/package/darcsden - cabal package
http://hub.darcs.net/simon/darcsden - source
http://hub.darcs.net/simon/darcsden/issues - bug tracker

Release notes for 1.1

Fixed:

New:

Brand new, from the Enhancing Darcsden GSOC (some WIP):

Detailed change log: http://hub.darcs.net/simon/darcsden/CHANGES.md

How to help

darcsden is a small, clean codebase that is fun to hack on. Discussion takes place on the #darcs IRC channel, and useful changes will quickly be deployed at hub.darcs.net, providing a tight dogfooding/feedback loop. Here’s how to contribute a patch there:

  1. register at hub.darcs.net
  2. add your ssh key in settings so you can push
  3. fork your own branch: http://hub.darcs.net/simon/darcsden , fork
  4. copy to your machine: darcs get http://hub.darcs.net/yourname/darcsden
  5. make changes, darcs record
  6. push to hub: darcs push yourname@hub.darcs.net:darcsden --set-default
  7. your change will appear at http://hub.darcs.net/simon/darcsden/patches
  8. discuss on #darcs, or ping me (sm, simon@joyful.com) to merge it

Credits

Alex Suraci created darcsden. Simon Michael led this release, which includes contributions from Alp Mestanogullari, Jeffrey Chu, Ganesh Sittampalam, and BSRK Aditya (sponsored by Google’s Summer of Code). And last time I forgot to mention two 1.0 contributors: Bertram Felgenhauer and Alex Suraci.

darcsden depends on Darcs, Snap, GHC, and other fine projects from the Haskell ecosystem, as well as Twitter Bootstrap, JQuery, and many more.


darcs hub news 2013/07

http://hub.darcs.net , aka darcs hub, is the darcs repository hosting site I operate. It’s like a mini github, but using darcs. You can:

The site was announced on 2012/9/15 (http://thread.gmane.org/gmane.comp.version-control.darcs.user/26556). Since then:

Please try it out, report problems, and contribute patches to make it better.


July 20, 2013

Jose Luis Neder

Patience diff algorithm benefits for darcs

July 20, 2013 10:37 PM UTC

In this post I am going to explain the benefits of Bram Cohen's patience diff algorithm for darcs, but first we have to understand how the algorithm works. There are great posts on the web that explain it really well, so instead of trying to explain it again I am going to quote the important parts I need in order to point out, by example, the benefits it has for darcs and Haskell-like non-curly languages.

A brief summary of what the patience diff algorithm does from Bram Cohen's Blog:

    1. Match the first lines of both if they're identical, then match the second, third, etc. until a pair doesn't match.
    2. Match the last lines of both if they're identical, then match the next to last, second to last, etc. until a pair doesn't match.
    3. Find all lines which occur exactly once on both sides, then do longest common subsequence on those lines, matching them up.
    4. Do steps 1-2 on each section between matched lines
    From Alfedenzo's Blog:
    The common diff algorithm is based on the longest common subsequence problem. Given (in this case) two documents, finding all lines that occur in both, in the same order. That is, making a third document such that every line in the document appears in both of the original documents, and in the same order. Once you have the longest common subsequence, all that remains is to describe the differences between each document and the common document, a much easier problem since the common document is a subset of the other documents.
    While the diffs generated by this method are efficient, they tend not to be as human readable.
    Patience Diff also relies on the longest common subsequence problem, but takes a different approach. First, it only considers lines that are (a) common to both files, and (b) appear only once in each file. This means that most lines containing a single brace or a new line are ignored, but distinctive lines like a function declaration are retained. Computing the longest common subsequence of the unique elements of both documents leads to a skeleton of common points that almost definitely correspond to each other. The algorithm then sweeps up all contiguous blocks of common lines found in this way, and recurses on those parts that were left out, in the hopes that in this smaller context, some of the lines that were ignored earlier for being non-unique are found to be unique. Once this process is finished, we are left with a common subsequence that more closely corresponds to what humans would identify.

Then, when you modify something that is between unique lines like this:
you get two different patches depending on which algorithm is used:
Patience diff:
Myers diff:
In this case, you have one hunk instead of three in the case of unrelated functions, but in the case of doSomething you still have a separate hunk because of the unique line in common.
Normally the Myers diff performs badly when some lines are only moved from one place to another, like in this case, and I'm glad to say that with the fine-tuned Myers implementation in darcs this doesn't happen. But it still happens in curly-braced languages, like in this case (from here):
You get these two different diffs:
Patience diff:
Myers diff:
In theory this could also happen without curly braces, if there are non-unique equal lines in a file like this:
Patience diff:
Myers diff:
Here you can see that the hunks offered by the patience diff algorithm are more useful and understandable. But this example depends on equal lines that are hardly found in real cases, especially in Haskell, where the whitespace is not necessarily the same between functions as it is in languages like Python.
Usually I would say it is better to have smaller hunks that are isolated between functions, because that should avoid dependencies between patches, but then there are more changes to select/unselect, and sometimes it depends on what you think is best to avoid conflicts between patches. That is why you get the choice to use one algorithm or the other.
You should also take into consideration how the algorithm is used.
When you use a command with a diff algorithm flag, the algorithm is always used to calculate the hunks of the actual unrecorded changes. The commands that do this are record, apply, mark-conflicts, pull, unpull, obliterate, revert, unrevert and rebase (suspend, unsuspend, reify, obliterate, inject and pull). The flag doesn't change an already saved patch, like one that was sent or pushed. Thereby patches to be applied or pulled are not modified by the diff flag.
When you use the record command, the saved patch depends on the diff algorithm and the hunks manually chosen. The patch is saved as hunks, so when you resolve conflicts between patches these saved hunks are used.
In the case of unrevert, you should take into account that the patch saved by revert is not affected by unrevert's diff flag. You can only get a different patch if you use the flag when you do the revert, i.e. "darcs revert --patience".
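
In practice that means the flag is given to the command that computes the hunks, for example (a small sketch):

$ darcs record --patience -a -m "refactor"   # hunks of the unrecorded changes computed with patience diff
$ darcs revert --patience                    # as noted above, this determines the patch saved for unrevert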

July 13, 2013

Simon Michael

darcsden cleanup

July 13, 2013 12:31 AM UTC

Back to the dev diary. Last post was 11 days ago, after a two-week opening streak of daily posts. I got blocked on one, then got busy. Press on.

Yesterday I started looking at BSRK Aditya’s GSOC darcsden enhancements, to review and hopefully deploy on darcs hub. So far he has worked on alternate login methods (github/google), password reminder, and darcs pack support (for faster gets).

This is forcing some darcsden cleanup, my first darcsden work in a while aside from routine ops and support tasks. I’m going to release what’s in trunk as 1.1, and then start assimilating the new work by BSRK, Ganesh Sittampalam and anyone else who feels like chipping in. Started putting together release notes and a hub status update.

The support requests seem to be on the rise - more usage ? I also found a good bug today: viewing a certain 1K troff file causes darcs hub’s memory footprint to blow up to 1.5G :)

It would be great to have more functionality (like highlighting) broken out into separate, expendable worker processes, erlang style.

July 12, 2013

Jose Luis Neder

Integrating patience diff and some benchmarks

July 12, 2013 04:22 PM UTC

The past two weeks I was integrating the patience diff algorithm, adding an option flag so it can be selected from the command line. This was not as easy as it seemed, because the diff algorithm was deeply integrated into the codebase, and adding a way to select between the two implementations meant more than changing little bits of code here and there. But it's done. There are a few rough edges here and there, but it is ready to be used.
Now you can use "--patience" to select the alternative patience diff algorithm. It should be relatively easy to add more algorithms now.

Regarding performance, after running a variety of benchmarks with both algorithms, in the end I really couldn't get conclusive results. Depending on the input, one algorithm could perform better than the other, but only by a small margin. These are some of the runs I got (ShellTestsBenchmark, PureCriterionBenchmark).

You can see the code in the darcs-patience and darcs-patience-benchmarks repositories.

This week I was also researching the best way to implement the --look-for-moves flag. I came to the conclusion that the best trade-off is to extend the functionality in hashed-storage, adding the inode of the files, and then making changes in some of the darcs commands to maintain and support this new info. I will be using the unix-compat package. The functionality is not yet fully implemented for Windows, but there may be a solution to that with this. Unfortunately I will have to move to a Windows machine to see how this could be implemented. There are some corner cases to consider as well.

You will be able to see my progress in the darcs-look-for-moves and hashed-storage-look-for-moves repositories in the next weeks.

July 07, 2013

Simon Michael

darcsden db thoughts

July 07, 2013 11:00 PM UTC

Spent about half of yesterday setting up Aditya’s darcsden patches on the dev instance of hub.darcs.net, testing them, and exploring db migration issues.

Following BSRK’s instructions, I got the dev instance authenticating via Google’s OAuth servers. Good progress. The UI flow I saw needs a bit more work - eg logging in with google seemed to want me to register a new account. Or, there may be a problem with my setup at Google (wrong callback urls ?) - will have to review it with BSRK.

Schema, migrations

My dev instance has so far been using the same database as the live production instance. This is partly because I don’t yet know how to run a second CouchDB instance, partly to reduce complexity, partly to be able to compare old and new code with the same realistic data set.

This of course can lead to trouble, if old and new code require different schemas. darcsden uses CouchDB, a “schemaless” database, but of course there is an implicit schema required by the application code, even if couch doesn’t enforce one. I got more clarity on this when I noticed my dev instance experiments causing errors on the production app.

New darcsden code may include changes to the (implicit) db schema. In this case, there’s a change to the user’s password field. I need to notice such schema changes, and if I want to exercise them on the dev instance, I should first also install them on the production instance. Or, use a separate couchdb instance. Or, use separate databases in the couchdb instance. Or possibly, use separate views in the couchdb databases ?

Eg, here BSRK made the code nicely read user documents (db records) with the old or new schema. Before testing it on the shared db I should have deployed that patch to production as well as dev.

Looking ahead, is this approach (including code to deal with all old schemas) the best way to handle this ? Maybe. It makes things work and seems convenient, at least for now. But it also reminds me of years working with Zope’s ZODB (a schemaless python object database) and the layers of on-the-fly schema updating that built up, and the uncounted number of runtime bugs hunted down due to schema variations in individual objects.

Schema-less or schema-ful ?

While recovering from this, I learned some more about managing couchdb, schema migration, and current couchdb alternatives.

Couch has some really good and unusual qualities, and I feel I’m only scratching the surface of its power. Even so, I’m starting to feel a schema-ful, relational database is a better fit for darcsden/darcs hub. Replacing couch has been a topic of discussion on #darcs for some time, for other reasons. Here are some reasons to replace it:

Some reasons not to:

July 01, 2013

Simon Michael

June review

July 01, 2013 11:00 PM UTC

The beginning of a new month. Here’s a quick update.

No hledger release today as there isn’t much new to ship, following a month with several bugfix releases and otherwise mostly infrastructural work (build and dev tool fixes, wiki styling, site update hook). 8/1 is the likely next release date. Oh, John and I also had a nice voice chat - nice to escape the IRC window isn’t it - reviewing our glorious *ledger plans, and I happily accepted his first hledger patch - thanks John! :)

My free hacking time in recent weeks went more towards darcs:

Jose Luis Neder

Patience Diff for Darcs

July 01, 2013 02:37 AM UTC

As you know, my GSoC project consists in enhancing the record command for Darcs.

As part of this, in the last two weeks I was implementing the patience diff algorithm for patches and testing it for correctness, performance and usefulness.

First, after identifying where I had to add the code, I proceeded to see if there was any implementation I could use (reuse code, don't reinvent the wheel). To my joy, David Roundy implemented it in his RCS Iolaus, and it was almost a direct replacement.
Before doing this I was working on a stack overflow issue (issue2313, patch1076), and I found that big files and diffs caused a stack overflow in this implementation too. After profiling, I reduced the problem to the function "byparagraph". It wasn't tail-recursive, so after converting it the problem went away, and I am happy to say it outperforms the old diff implementation in these use cases.
For testing the performance I used two tools: criterion and GHC's time and space profiling system. The results show that there isn't any difference in day-to-day use, performance-wise. It performs particularly badly with files that have many equal lines. I don't think this is really a problem because it is very rare (but I'm looking into it). The old algorithm uses more memory, but I couldn't make it use all of my 4 GB of RAM as much as I tried, so I can say this is not an issue.
For testing usefulness I made some examples (based on these posts [1][2][3]). It turns out that the two different algorithms return actually quite similar patches (if not exactly the same patch) in non-curly-braced languages like Haskell. On the other hand, in curly-braced languages patience diff is much better.

All in all, I could say that there are several advantages to making the switch. I will be updating my findings about the performance and usefulness in the next posts.
You can review the code in http://hub.darcs.net/jlneder/darcs-patience.
In http://hub.darcs.net/jlneder/darcs-patience-tests there is code to compare the two algorithm implementations.