I've used mercurial for a while – simply for the ease of creating repositories and setting up servers. It's a wonderful thing not to need to worry about extracting sources and creating patch files anymore. Other than building temporary working repositories, though, I've used DVCS almost identically to centralized solutions. Indeed, one of the major gripes I hear when people adopt mercurial or git is that "it works just like svn, only there are extra steps". It's the extra step that adds value.
We've all been there before – someone makes a commit, landing on the build server with a dull thud, killing the 'days-since-last-broken' timer. Taking a look at the commit, everyone immediately asks: "Did you even bother to compile it?" And then commences whatever form of hazing the team uses to punish such poor behavior.
Obviously broken commits come down to a handful of faults: sloppy "small" changes, missing files, and bad merges. There's not much we can do about sloppy work, but the last two hit even the best software developers hard.
With subversion and friends, a good programmer will exercise a standard workflow of:
CODE -> UPDATE -> COMMIT
Generally the "update" part is where things go badly. Either the programmer doesn't bother to update and test, checking only that the code compiles, or the update leads down a rabbit hole of extra work, especially when multiple people are working on the same code.
Many teams carry the same workflow over when moving to mercurial or git, driven by the idea that you don't want to see large numbers of branches in the final history. I understand the validity of that point of view, especially for purposes of code review and auditing history. Even then, however, the DVCS workflow SHOULD change.
Let's look at the DVCS 'tweak' of the old workflow, for those that care about keeping a minimalist history:
CODE -> COMMIT -> PULL -> REBASE -> PUSH
Both mercurial and git offer rebase options. I believe mercurial outshines git here though – the mq extension provides an amazing amount of flexibility to walk up and down commit chains, with built-in workflows to make sure you don't get out of sync with the external repository. Importantly, however, note that we are doing our work, saving a snapshot, and then pulling and updating.
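To make the rebase flow concrete, here's a minimal sketch using git commands (the mercurial equivalent would be `hg pull --rebase` with the rebase extension, or mq as described above). The `central.git`, `alice`, and `bob` names are hypothetical stand-ins for a team server and two developers, and everything runs in a throwaway temp directory:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# Hypothetical local bare repo standing in for the team's central server.
git init -q --bare central.git -b main

# Alice seeds the repository.
git clone -q central.git alice
(cd alice &&
 git symbolic-ref HEAD refs/heads/main &&    # pin the unborn branch name
 git -c user.name=alice -c user.email=alice@example.com \
     commit -q --allow-empty -m "base" &&
 git push -q origin main)

# Bob clones, then Alice lands another change while Bob is coding.
git clone -q central.git bob
(cd alice &&
 echo one > a.txt && git add a.txt &&
 git -c user.name=alice -c user.email=alice@example.com \
     commit -q -m "alice: add a.txt" &&
 git push -q origin main)

# Bob: CODE -> COMMIT -> PULL (with rebase) -> PUSH
cd bob
echo two > b.txt && git add b.txt
git -c user.name=bob -c user.email=bob@example.com commit -q -m "bob: add b.txt"
git -c user.name=bob -c user.email=bob@example.com pull -q --rebase origin main
git push -q origin main

git log --oneline --graph    # linear history: no merge commits
```

The resulting history is linear: bob's commit is rewritten on top of alice's, so no merge commit ever appears on the central server.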
For those that don't care about the minimal history:
CODE -> COMMIT -> PULL -> MERGE -> PUSH
Here, the original 'working' snapshot that the developer tested and worked against is forever frozen in history. This gets at the core "plus" side of using DVCS: we have a reproducible state of the repository at the point where the developer performed their work.
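The merge flow looks nearly identical in practice; only the final steps change. Again a sketch in git with hypothetical `alice`/`bob` repositories in a temp directory:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# Same hypothetical central repo, but this time Bob merges instead of rebasing.
git init -q --bare central.git -b main
git clone -q central.git alice
(cd alice &&
 git symbolic-ref HEAD refs/heads/main &&    # pin the unborn branch name
 git -c user.name=alice -c user.email=alice@example.com \
     commit -q --allow-empty -m "base" &&
 git push -q origin main)

git clone -q central.git bob
(cd alice &&
 echo one > a.txt && git add a.txt &&
 git -c user.name=alice -c user.email=alice@example.com \
     commit -q -m "alice: add a.txt" &&
 git push -q origin main)

# Bob: CODE -> COMMIT -> PULL -> MERGE -> PUSH
cd bob
echo two > b.txt && git add b.txt
git -c user.name=bob -c user.email=bob@example.com commit -q -m "bob: add b.txt"
git fetch -q origin
git -c user.name=bob -c user.email=bob@example.com \
    merge -q -m "merge alice's changes" origin/main
git push -q origin main

# Bob's original tested commit survives unchanged in history.
git log --oneline --graph
```

Here bob's commit, exactly as he tested it, remains frozen in history; the merge commit records how it combined with alice's work.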
Now, this depends on us loosening the reins a bit compared to our centralized workflows. The critical element is realizing that in either git or mercurial, you are working with full repository snapshots, as opposed to changes.
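You can see the snapshot model directly in git's object store (a quick sketch; mercurial's storage differs internally, but a changeset likewise identifies a complete manifest of the repository):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q -b main
echo hello > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "first snapshot"

# A commit object begins with a pointer to a complete tree (a snapshot
# of the whole working directory), not a patch against its parent.
git cat-file -p HEAD
```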
With the MERGE model, we are showing the world the state of our local machine, as exactly as possible, at the moment we ran through and tested our code. Further, we can see the interaction between our changes and someone else's, even when that other work didn't result in merge conflicts.
It's very subtle, but the difference between 'snapshot' management and 'change' management is huge. Leveraging that difference will keep the "oops, bad merge" and "oops, I missed a file" build breakages to a minimum.