
A couple of questions on Concurrency

Customer (Senior Member)
User Submitted Questions to Support
Joined: 30-May-2007 | Location: United States | Posts: 260
    Posted: 20-Jul-2007 at 2:56pm

Two questions which I hope will not take too much of your time:

 

1. (To see if I have understood the n-tier implementation): Is a BOS effectively a remoting server for a client? If so, does one BOS serve multiple clients?  Yes and Yes

2. We invented a “general” optimistic locking model which we hoped to be able to implement (over time) for all of our data access models in our many and varied products. From what I’ve read and understood so far (which is more than last week but still not enough), I’m not sure if we can implement it. I like our model a lot, as it is both simple and yet gives great flexibility for optimistic update conflict detection.

The essence of the model is the signature of the Update() method, which is
entity.Update(unchangedList, modifiedList);

The rules are:
unchangedList specifies a list of (fieldname,fieldvalue) pairs where the field value (in the database) must not have changed since the original data fetch.
modifiedList is a list of (fieldname, newvalue) pairs which contain the new field values.

 

The lists are completely unrelated (but must not overlap!).  In (almost) all cases, the unchanged list must include the primary key (PK) values.

Examples:

i) If the unchangedList contains only the PK value(s), then what you have is “last writer wins”.

ii) If the unchangedList contains the PK plus (say) a timestamp field, you have a “typical” optimistic locking implementation.

iii) By specifying explicit fields in this list, you can be much more subtle and accommodating for multiple users updating different parts of a record during the same time period.

In particular, whether or not you care about overwriting a changed value is up to the business logic in this model.  Additionally, the model accommodates existing tables without the need to add new fields explicitly for optimistic support.
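To make this concrete, here is a rough sketch of what an implementation of that Update() signature could look like. The FieldValue type, the OptimisticUpdater class, and the SQL-building approach are purely illustrative (not from any product); the point is just that the unchangedList becomes the WHERE clause and the modifiedList becomes the SET clause.

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.Common;
using System.Linq;

// Illustrative only: a (fieldname, value) pair.
public record FieldValue(string Name, object Value);

public static class OptimisticUpdater
{
    // unchangedList -> WHERE clause; modifiedList -> SET clause.
    // Zero rows affected means some "unchanged" field was modified by another user.
    public static void Update(DbConnection conn, string table,
        IReadOnlyList<FieldValue> unchangedList, IReadOnlyList<FieldValue> modifiedList)
    {
        if (unchangedList.Count == 0 || modifiedList.Count == 0)
            throw new ArgumentException("Both lists must be non-empty (and must not overlap).");

        using var cmd = conn.CreateCommand();
        var setClause = string.Join(", ",
            modifiedList.Select((f, i) => $"{f.Name} = @m{i}"));
        var whereClause = string.Join(" AND ",
            unchangedList.Select((f, i) => $"{f.Name} = @u{i}"));
        cmd.CommandText = $"UPDATE {table} SET {setClause} WHERE {whereClause}";

        for (int i = 0; i < modifiedList.Count; i++) AddParam(cmd, $"@m{i}", modifiedList[i].Value);
        for (int i = 0; i < unchangedList.Count; i++) AddParam(cmd, $"@u{i}", unchangedList[i].Value);

        if (cmd.ExecuteNonQuery() == 0)
            throw new DBConcurrencyException(
                $"Optimistic update conflict on {table}: an 'unchanged' field was modified.");
    }

    private static void AddParam(DbCommand cmd, string name, object value)
    {
        var p = cmd.CreateParameter();
        p.ParameterName = name;
        p.Value = value ?? DBNull.Value;
        cmd.Parameters.Add(p);
    }
}

With this shape, passing only the PK pair(s) in unchangedList gives example i), adding a timestamp pair gives example ii), and adding arbitrary field pairs gives example iii).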

 

I believe that the Mapping tool has a place where you can specify a single field that represents whether a record has been updated? That would correspond to example (ii) above, but fixed with regard to the number of fields (one) and the field choice, at design time?

We can figure out later how to implement #2 (or ask later if we can’t stretch that far!). The main question right now is: do we have the capability to do it within the DevForce architecture?
 

IdeaBlade (Moderator)
Joined: 30-May-2007 | Location: United States | Posts: 353
Posted: 20-Jul-2007 at 2:58pm

DF optimistic concurrency depends upon the existence of a single data column that serves as the concurrency indicator. It can be any type; the only requirement is that it change every time the row is updated (obviously the range of possible values must be large enough; a bit field would not do :-)

 

You can turn concurrency off for any business object type, in which case it's "last writer wins" for saves of that type.

 

Thus we have coverage for (i) and (ii), albeit by different means.
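To illustrate the pattern (this is not the SQL that DevForce actually generates; the table and column names are made up, and an integer RowVersion is just one way to satisfy the "must change on every update" requirement):

using System;
using System.Data;
using System.Data.Common;

public static class ConcurrencyColumnExample
{
    // The WHERE clause matches zero rows if another user saved the row since we read it.
    public static void UpdateOrder(DbConnection conn, int orderId,
        DateTime requiredDate, int originalRowVersion)
    {
        using var cmd = conn.CreateCommand();
        cmd.CommandText =
            @"UPDATE Orders
                 SET RequiredDate = @requiredDate,
                     RowVersion   = RowVersion + 1
               WHERE OrderId      = @orderId
                 AND RowVersion   = @originalRowVersion";

        AddParam(cmd, "@requiredDate", requiredDate);
        AddParam(cmd, "@orderId", orderId);
        AddParam(cmd, "@originalRowVersion", originalRowVersion);

        if (cmd.ExecuteNonQuery() == 0)
            throw new DBConcurrencyException("The Order was changed by another user.");
    }

    private static void AddParam(DbCommand cmd, string name, object value)
    {
        var p = cmd.CreateParameter();
        p.ParameterName = name;
        p.Value = value;
        cmd.Parameters.Add(p);
    }
}

Turning concurrency off simply amounts to dropping the RowVersion condition from the WHERE clause, which is "last writer wins".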

 

====

 

We deliberately do not support (iii), as we feel it is unsafe and we cannot think of a benefit that adequately compensates for the loss of safety.

 

What is unsafe?

 

Business object validation typically involves comparisons of multiple properties of a business object and often of properties of related objects. I'll stick to the cross-property, same object case.

 

If I perform a test that involves column "A", I'm expecting the value of "A" to be the one in the object I am about to update. Suppose that value is trumped by a different value from a different user who managed to save the object while I wasn't looking. I will not detect this difference. I will think the object is valid. As it happens, my object would be invalid if column "A" has the value now in the database.

 

Thus I cannot be certain that my object is valid when I save it. The (iii) feature would undermine my object integrity tests. I'd probably never know it happened (although I suppose I could RE-VALIDATE IT when DevForce returned it from the save ... but who is going to do that?).

 

Now suppose I proceed as we do in DevForce [(ii)]. 

 

I detect a concurrency problem. I compare the object I tried to write with the object as it is in the database. I see that column "A" was changed. I decide to blend in the value from the database and try to save again. BUT THIS TIME, I re-validate the object before save (as I always do); if it is ok, the save continues; if it is not ok, we stop right there.

 

This is surely much safer than (iii). I can achieve the same effect as (iii) - that is, I can blend this user's changes with changes made by someone else - but I do so under validation control. There is no extra programming cost or complexity - approach (iii) and the one I describe both involve blending values from different user inputs - but the programming takes place in the business model where I can see it ... not some code in a distant module.
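Here is a sketch of that flow. Order, IOrderStore, and the Validate rule are stand-ins for whatever your business model provides; none of this is DevForce API, it just shows the detect / blend / re-validate / retry sequence:

using System;
using System.Data;

public class Order
{
    public int OrderId { get; set; }
    public string ColumnA { get; set; }          // the cross-checked property from the example
    public decimal Amount { get; set; }
    public int RowVersion { get; set; }          // the concurrency value we originally read
}

public interface IOrderStore
{
    void Save(Order order);                      // throws DBConcurrencyException on a collision
    Order ReloadFromDatabase(int orderId);       // the row as it is in the database right now
}

public static class ConcurrencyRetry
{
    public static void SaveWithBlend(Order mine, IOrderStore store)
    {
        Validate(mine);                          // always validate before a save attempt
        try
        {
            store.Save(mine);
        }
        catch (DBConcurrencyException)
        {
            // Another user saved first. Compare with the database copy and blend:
            Order theirs = store.ReloadFromDatabase(mine.OrderId);
            mine.ColumnA    = theirs.ColumnA;    // accept their change to "A"
            mine.RowVersion = theirs.RowVersion; // pick up the new concurrency value

            Validate(mine);                      // RE-VALIDATE the blended object ...
            store.Save(mine);                    // ... and only then try the save again
        }
    }

    private static void Validate(Order o)
    {
        // A cross-property rule of the kind described above (illustrative only).
        if (o.ColumnA == "Closed" && o.Amount > 0)
            throw new InvalidOperationException("Cannot change the amount of a closed order.");
    }
}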

 

There is a slight performance impact - I have to update twice. But we're talking about optimistic concurrency, yes? Collisions are supposed to be rare. If they are NOT rare, then I would recommend a more pessimistic strategy such as soft locking. 

Customer (Senior Member)
Joined: 30-May-2007 | Location: United States | Posts: 260
Posted: 20-Jul-2007 at 3:07pm
Thanks for the response. How could we have so many of the problems you point out? This made me think more carefully about the details of what we are requiring to happen underneath the interface I described.

At the heart of the matter are some rather important requirements for our interface to work. One of them is that the implementation has to run in a transaction. I’m now wondering if it’s ever possible to have a middle tier do “absolute” business logic verification outside a transaction?

Anyway, here are my rules (with a rough code sketch after the list):

1. Before applying validation, start a transaction.
2. Retrieve the current copy of the record from the database.
3. Verify that none of the unchangedList fields have changed (send errors if so).
4. Apply the updated field values to the record.
5. Perform validation.
6. Update if OK; send error(s) if not.
7. End the transaction.
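
Something like this, with the two lists represented as dictionaries and IRecordStore standing in for whatever actually reads and writes the record (all of these names are illustrative, not from any product):

using System;
using System.Collections.Generic;
using System.Transactions;

public interface IRecordStore
{
    IDictionary<string, object> FetchCurrent(string table, IDictionary<string, object> primaryKey);
    void Write(string table, IDictionary<string, object> primaryKey, IDictionary<string, object> values);
}

public static class TransactionalUpdate
{
    public static void Update(IRecordStore store, string table,
        IDictionary<string, object> primaryKey,
        IDictionary<string, object> unchangedList,
        IDictionary<string, object> modifiedList,
        Action<IDictionary<string, object>> validate)
    {
        // 1. Before applying validation, start a transaction.
        using var scope = new TransactionScope();

        // 2. Retrieve the current copy of the record from the database.
        var current = store.FetchCurrent(table, primaryKey);

        // 3. Verify that none of the unchangedList fields have changed.
        foreach (var kvp in unchangedList)
            if (!Equals(current[kvp.Key], kvp.Value))
                throw new InvalidOperationException(
                    $"Field '{kvp.Key}' was changed by another user.");

        // 4. Apply the updated field values to the record.
        foreach (var kvp in modifiedList)
            current[kvp.Key] = kvp.Value;

        // 5. Perform validation (the delegate throws if the blended record is invalid).
        validate(current);

        // 6. Update if OK (an exception above aborts the transaction instead).
        store.Write(table, primaryKey, current);

        // 7. End (commit) the transaction.
        scope.Complete();
    }
}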

Some comments: 

The idea behind all this is to let the user know that something in the database has changed (due to someone else’s action) from what was on their screen when they pressed “enter”.

We liked the model as it worked over the web, and for different technologies across tiers.

The requirement to seed the validation buffer with the existing values is obviously how we ensure a “proper” validation. Can this notion fit into a multi-tier ORM mapper model?

It is the transaction that avoids the problem you described of integrity checks involving other object fields.

I’m now thinking hard about the meaning and appropriateness of the unchanged list. One simplistic observation is that your “A” example below would be a programming error: if the changes submitted depend on the value of A not being changed, then it should have been in the list in the first place. This leads to the question: suppose it isn’t in the list… shouldn’t the business logic validation catch it anyway? Perhaps so, in which case this makes me wonder if the list is needed at all… just make the changes and let the validation do its thing. Maybe this unchangedList is “just” a kind of optimization of the optimistic update implementation that can be pushed up the object hierarchy into a base class and automated; this makes it a kind of metadata, except that it can be specified on an update-by-update basis.

Here is something more interesting to me, that I hadn’t fully considered before:  Should you, in principle, always validate everything on any change?

If I am inclined to answer yes to this question, then my unchangedList becomes only a list of fields whose changes the user wants to be notified about, rather than a list to guide or help or optimize validation. If that’s the case, then the only purpose it would serve is to allow the UI to say:

“if the changed field is not in this list, then perform the update regardless (as long as validation is OK)”. At that point, the list becomes a finer-grained version of your “modified” column, without that column needing to exist (at least for this purpose; audits are a different matter). By itself this feature seems quite useful. I can imagine changes to different parts of large records that can safely be performed independently.

I’m not sure where this conversation is going, but I’m constantly amazed at how something conceptually simple (an update) has these layers of complexity. Don’t worry about a deep response if you’re too busy; this has gotten a bit long-winded! If we use DevForce it becomes somewhat academic, as we’ll go with your strategy. Which leaves me with one more important question:

If we want to do optimistic locking, we must have a “changed” field that you can track.  If we don’t have that today, we’ll need to add it to most of our tables.  Yes?

If we don’t make that change, all of our updates will be last one wins (because you won’t hold locks)? (oops, two questions)

IdeaBlade (Moderator)
Joined: 30-May-2007 | Location: United States | Posts: 353
Posted: 20-Jul-2007 at 3:10pm
 

Yes, this is fundamentally harder than it seems. In this it is like multi-threaded programming - the simple and obvious answer works 99.99% of the time ... but that rare - often undetected - failure causes enormous heartburn somewhere far from the source of the original injury.

 

I would like to be able to follow the thread of your reasoning. That would take time and I'd like to get back to you right away on your final two questions:

 

"If we want to do optimistic locking, we must have a “changed” field that you can track.  If we don’t have that today, we’ll need to add it to most of our tables.  Yes?

If we don’t make that change, all of our updates will be last one wins (because you won’t hold locks)? (oops, two questions)"

 

You don't have to have a concurrency column on every table - only on those tables that you use to detect and signal concurrency violations. DevForce will apply optimistic concurrency checking on those tables and "last one wins" on the others.

 

Let me illustrate: Suppose I have such a column on Order but not on OrderDetail.

Scenario #1: I change an order; you change the same order; I save, you save; DF detects the problem and tells you.

 

Scenario #2: I change an orderDetail; you change the same orderDetail; I save, you save; DF is oblivious and your save trumps mine.

Is this bad? Not necessarily!

 

Let me add another rule: "Every time I change an orderDetail, I must also change its parent Order." This rule implies that a change to an order's detail items is tantamount to a change to the order itself.

 

I happen to like that logic as I'll explain in a minute. But let's just run with this for now and replay Scenario #2:

Scenario #2b: I change an orderDetail which changes the parent order; you change the same orderDetail which changes the same parent order; I save, you save; DF detects the collision on order, fails your transaction, neither the order nor the detail is saved, and you hear all about it.

This is exactly what I want and, in fact, catches another slippery bug that neither of us considered in the discussion below (I left it out deliberately because there was enough to worry about).

Scenario #3: I change an orderDetail; you change its parent order; I save, you save; DF detects the collision on order, fails your transaction, your order change is rejected.

Observe that, in the absence of the rule, "changing an OrderDetail changes its parent Order", none of the mechanisms either of us described below would have caught that particular concurrency violation. Yet surely the Order is fundamentally changed if I modify/add/delete any of its OrderDetails. I should be wondering what you are doing changing RequiredDate on my order while I was changing the quantity of Acme Widgets?

 

We (you and I as sales reps working on the order) are clearly out-of-sync and the application should alert us to that fact.

 

In my solution, I've made Order the "master" object in a dependent object graph (there's a UML term for this but let's not be pedantic). Order is "soft locking" its OrderDetails. The master object provides effective concurrency control over objects in its dependent graph. Therefore, those dependent objects do not need their own concurrency column.

 

This works as long as all processes that can save data to the database play by the rules:

Always save members of the object graph within a transaction (i.e., you can't save an orderDetail on its own).

Always "dirty" the parent when you modify/add/delete a child (or grandchild).

 

It follows that any object can play the "master" object role.
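
A sketch of the second rule (Order, OrderDetail, and the Touch mechanism here are illustrative; in a real model you would push this into a base entity class or an interception hook so nobody can forget it):

using System;
using System.Collections.Generic;

public class Order
{
    public int OrderId { get; set; }
    public int RowVersion { get; set; }                  // the single concurrency column
    public DateTime LastTouched { get; private set; }
    public List<OrderDetail> Details { get; } = new List<OrderDetail>();

    // "Dirtying" the master: any change in the dependent graph modifies the Order,
    // so the Order's concurrency column protects the whole graph on save.
    public void Touch() => LastTouched = DateTime.UtcNow;

    public OrderDetail AddDetail()
    {
        var detail = new OrderDetail(this);
        Details.Add(detail);
        Touch();                                         // adding a child dirties the parent
        return detail;
    }

    public void RemoveDetail(OrderDetail detail)
    {
        Details.Remove(detail);
        Touch();                                         // deleting a child dirties the parent
    }
}

public class OrderDetail
{
    internal OrderDetail(Order parent) { Parent = parent; }

    public Order Parent { get; }

    private int quantity;
    public int Quantity
    {
        get => quantity;
        set { quantity = value; Parent.Touch(); }        // modifying a child dirties the parent
    }
}

Every path through the dependent graph ends in Touch(), so the Order row is always part of the save and the collisions in Scenarios #2b and #3 surface as ordinary concurrency violations on Order.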

 

Suppose you couldn't add a concurrency column to Order. Fortunately, you can create a table to hold some kind of object to serve as "the master of the master".

 

Let's abstract this and dedicate a table to soft locking of any table in our database. Each of its rows is a tuple: {ObjectType, ObjectPrimaryKey, RowVersion, UserId, ...}.

 

Every time I update an order, I try to acquire the SoftLock object for that order. If there isn't one, I create one. I now include this in my transaction with the Order and OrderDetail changes.
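
In code, the acquisition step might look something like this (SoftLock, ISoftLockStore, and the method names are illustrative, not a DevForce API; the SoftLock table itself carries the concurrency column):

using System;

// One row per soft-locked object: {ObjectType, ObjectPrimaryKey, RowVersion, UserId, ...}
public class SoftLock
{
    public string ObjectType { get; set; }         // e.g. "Order"
    public string ObjectPrimaryKey { get; set; }   // e.g. the OrderId, as a string
    public int RowVersion { get; set; }            // the concurrency column for this row
    public string UserId { get; set; }
    public DateTime AcquiredAt { get; set; }
}

public interface ISoftLockStore
{
    SoftLock Find(string objectType, string objectPrimaryKey);   // null if no lock row exists
    void Add(SoftLock softLock);                                 // queued into the caller's save
}

public static class SoftLockExample
{
    // Called as part of the same transaction that saves the Order and its OrderDetails.
    // If another user has saved the same SoftLock row in the meantime, its concurrency
    // check fails and the whole transaction fails with it.
    public static SoftLock Acquire(ISoftLockStore store, string objectType,
        string objectPrimaryKey, string userId)
    {
        var softLock = store.Find(objectType, objectPrimaryKey);
        if (softLock == null)
        {
            softLock = new SoftLock
            {
                ObjectType = objectType,
                ObjectPrimaryKey = objectPrimaryKey,
                UserId = userId,
                AcquiredAt = DateTime.UtcNow
            };
            store.Add(softLock);
        }
        else
        {
            softLock.UserId = userId;              // refreshing (or stealing) the lock
            softLock.AcquiredAt = DateTime.UtcNow;
        }
        return softLock;
    }
}

The same row can double as a non-blocking CheckOut token, as described next.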

 

You see where this leads.

 

P.S.: This technique imposes a minuscule performance impact and is a small addition to the business logic that can be encapsulated in one place, either in a base entity class or via some kind of "mix-in" approach.

 

===

 

You may also see that there is an opportunity to use the SoftLock table as a non-blocking CheckOut mechanism.  Joe can check out the Order before working on it. Sam's client detects the checkout and tells Sam that Joe has it.  Sam can move on to something else, call Joe, or ... steal it from Joe by overwriting his SoftLock object.

 

This last alternative may seem sneaky. It may also be necessary:

Joe is at lunch; there's a crisis; Sam pitches in and gets it done; Joe returns ... tries to finish the order ... and discovers that Sam stole the order. He may be pissed but ... that's another story.

At least one of our customers uses this technique with considerable satisfaction. It provides safety and information ... and is non-blocking.

 

===

 

So that's my "quick" answer. To summarize:

optimistic concurrency only works for tables that have a suitable concurrency column

not every table has to be under concurrency control

if you can't modify a table, you can check it for concurrency by subordinating it in a transaction to another table

you can implement "master/detail" concurrency checking and "CheckOut" control via this mechanism

 

Hope this all makes sense.

 

 

 

 
