08:30:15 #startmeeting
08:30:15 Meeting started Fri Jun 6 08:30:15 2014 UTC. The chair is hagarth. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:30:15 Useful Commands: #action #agreed #halp #info #idea #link #topic.
08:30:30 xavih: let us wait for Dan to join in
08:30:38 ok, no problem
08:30:40 who else do we have here today?
08:31:15 xavih, Hi. This is kp. I work primarily on glusterd
08:31:45 krishnan_p: nice to meet you :)
08:32:49 xavih: Pranith here, work on afr :-)
08:33:01 xavih: Dan is having a bit of problems with his laptop. Should be in here soon.
08:33:04 pranithk: Oh, really :D hehe
08:33:30 xavih: :-)
08:33:44 xavih: Raghavendra here. Currently working on snapshots
08:34:31 there comes Dan
08:35:05 * ndevos is here, but currently working on a *cough* Xen *cough* kernel bug
08:35:19 ndevos: good luck :)
08:35:29 hagarth: hehe, thanks
08:35:35 shall we get started?
08:35:52 hagarth: +1
08:35:54 yes
08:36:26 Xavi, we were wondering if you could walk us through a write operation, and we could ask questions as we go
08:37:03 ok, I can try...
08:37:24 do we use the latest code review? (pushed yesterday)
08:37:39 sure
08:37:52 we have it :)
08:38:02 ok then
08:38:32 the entry point is easy: ec_gf_writev() in ec.c
08:39:01 here I only call the real write function with some additional parameters
08:39:19 I'll only comment on the interesting ones, the others should be obvious
08:39:40 xavih: sure
08:39:49 the third parameter is a bitmask of subvolumes to which the request should be sent
08:39:59 in this case -1 means all
08:40:25 Ok
08:40:25 each bit refers to a subvolume in the order defined in the volfile
08:40:37 ok
08:41:23 the fourth argument says how many answers are needed at minimum to consider the result valid
08:41:54 answers are grouped looking at the ret code, errno, xdata and other things
08:42:23 xavih: what is the usual value for the fourth argument?
08:42:54 a group of combined answers will only be considered a valid answer for this request if it's formed by, at least, the minimum number of individual answers specified in this argument
08:43:06 hagarth: it depends on the request
08:43:44 for example
08:44:20 normal requests like readv, writev, truncate, unlink, ... all use EC_MINIMUM_MIN
08:44:50 xavih, can we think of a grouping of responses as a tuple defined by (op_ret, op_errno, xdata)?
08:44:52 this means that at least N (bricks) - R (redundancy) subvolumes must agree on the answer
08:45:04 this can be seen as a quorum enforcement
08:45:13 xavih: right
08:45:51 krishnan_p: yes, but it also checks other things like iatt or other cbk arguments, depending on the request
08:46:16 xavih: what will happen when, at the time of winding, a quorum number of bricks are up but it succeeded on fewer bricks?
08:46:21 xavih: why is EC_MINIMUM_MIN -2? rather, what is the significance of -2?
08:46:46 in some cases the minimum is a must, for example on read requests, because if less than N - R are available, it's impossible to generate an answer
08:47:37 pranithk: the request will be sent, and when it's detected that there aren't enough combinable answers, an EIO will be reported to the caller
08:48:12 xavih: but the data is written on some of the bricks... self-heal handles it, is it?
08:48:27 hagarth: it's only because it's determined later when the request is initiated. This could have been taken from ec->fragments
08:49:11 hagarth: EC_MINIMUM_ALL can only be determined when the operation begins (it depends on alive bricks, successful locks and successful preop)
08:49:27 hagarth: I used constants for the other cases only to be consistent
08:49:44 xavih: ok
08:49:45 hagarth: and avoid having to access ec in ec_gf_xxx() functions
08:50:12 pranithk: if the data is written to enough bricks (i.e. N - R at least), self heal will recover it
08:50:40 xavih: in the case where it is not, what will happen to the partial write?
08:50:50 xavih, what is the type of 'ec'? Is it ec_fop_data_t?
08:50:51 pranithk: however if, for example, there are N - R bricks alive and one of them fails the write, currently the data is irrecoverable
08:51:05 krishnan_p: ec_t
08:51:13 xavih: hmm...
08:51:25 krishnan_p: it's the private data from this->private
08:51:36 pranithk: I don't know how to solve this situation...
08:51:56 xavih: ok we shall see about it later... please continue writev from where we left off...
08:52:05 ok
08:52:09 xavih, OK
08:52:37 let's continue the flow
08:53:06 the minimum argument takes importance on self heal, where some requests are valid even with one valid answer
08:53:30 the fifth argument is the callback function to be called when the fop is finished. It can be NULL. For normal fops it's the default *_cbk() function
08:53:42 ok
08:54:28 the sixth argument is any data to be attached to the fop (used on self-heal)
08:54:51 the remaining args are the normal writev arguments
08:55:02 dlambrig: You should talk to the guy from ceph about how they handle partial failures...
08:55:52 ec_writev() in ec-inode-write.c prepares the request
08:56:26 it first calls ec_fop_data_allocate() that creates the fop_data_t structure that will be used through all the fop processing
08:57:10 do you want me to detail the arguments of this function?
08:57:16 pranithk: The Ceph engineer is Loïc Dachary and he is a very good resource for us, he is not yet a RH employee but will be soon.
08:57:27 xavih: yes, please do
08:57:31 ok
08:57:31 pranithk: Loïc is actually in #gluster-dev atm
08:57:31 that is a key function
08:58:06 the third argument is the fop type. Used basically for logging
08:58:11 the next one is flags
08:58:48 they say if the fop needs locking (inode or entry) and preop handling
08:59:04 it also says to how many subvolumes the request must be sent
08:59:16 is that the 2?
08:59:52 what is the 2? :)
09:00:08 no, flags also say what "things" must be merged in combined answers.
It can be a dict, a loc, etc
09:00:36 since there can be multiple iatt answers, that 2 says how many iatts must be combined from answers
09:00:52 in this case, the write callback receives 2 iatt structures that must be merged
09:01:06 xavih, how do we determine we need to combine only 2 of them?
09:01:21 krishnan_p: looking at the callback argument list :)
09:02:02 krishnan_p: all iatts in an answer must agree to be combined
09:02:25 otherwise it means that the brick has had some problem and it's not in sync with the others
09:03:12 is this clear?
09:03:17 xavih: combining 2 iatt structures in writev_cbk, do you mean prebuf and postbuf?
09:03:37 raghu: yes, in this case it corresponds to prebuf and postbuf
09:03:43 xavih, what would be the behaviour if the iatts didn't agree? i.e. not in sync
09:04:12 krishnan_p: then the answers won't be combined. They will belong to two different groups
09:05:18 xavih, OK. So, does the combining operation take care of whether the responses (answers) are in sync?
09:06:02 krishnan_p: yes. This is done to detect inconsistent bricks and initiate self-heal on them when necessary
09:06:49 is it ok to continue with the next arg?
09:07:09 yes
09:07:16 xavih, if after the combine we don't receive N - R answers in any of the groups, then we fail the writev?
09:07:59 krishnan_p: yes. This is what pranithk said. I don't know how to solve this situation
09:08:06 in the current implementation I return EIO
09:08:16 xavih, oops. OK
09:08:34 target and minimum are already explained
09:09:36 the next one says how many answers are expected to be received. Now that I've seen it, I see that it's something old and probably I could remove this one...
09:09:57 I think I always use the same value... I'll review later...
09:10:12 the next one is the function to be called to wind the request to each subvolume
09:10:38 except for write, it's a straightforward STACK_WIND
09:10:56 the next one is the function that will control the life cycle of the fop
09:11:03 it's basically a state machine
09:11:24 callback and data come from ec_gf_writev()
09:11:31 any question on these arguments?
09:11:50 not for me, we will get to the state machine internals shortly
09:11:56 yes
09:11:57 ok
09:12:32 if ec_fop_data_allocate() fails, the callback function is called with an EIO
09:12:50 otherwise, the fop structure is populated with the writev arguments
09:13:06 this is what ec_fop_data_set_xxx() does
09:13:20 OK
09:13:30 finally, ec_manager() is called to begin the processing of the request
09:13:37 now the fun begins..
09:13:53 it's important to note that the second argument of ec_manager() is an error code
09:14:16 if some of the ec_fop_data_set_xxx() failed, the operation will be initiated with an EIO error
09:14:45 let's go to ec_manager() in ec-common.c
09:14:52 xavih: ok
09:15:17 EC_STATE_INVALID is 0 (the value that an uninitialized fop will have)
09:16:03 every state in the state machine can have two "flavors": when there is an error and when there is not
09:16:27 any positive state means everything is ok. a negative state means some error happened
09:17:11 then, if the fop needs to lock an inode or entry, the owner of the stack frame is set to a different value for each request
09:17:33 __ec_manager() is the core of the state machine
09:18:05 first it handles the error code.
If there is an error code, the state is negated (to indicate an error)
09:18:45 then the handler specified in the call to ec_fop_data_allocate() is called to manage the states of the fop
09:19:24 when it returns (we'll see that function later), if the state is EC_STATE_INVALID, it means that the state machine has finished and it is released
09:19:56 if not, ec_wait() waits until any possible subrequests initiated by the fop->handler() are completed
09:20:10 it returns the error code from those subrequests
09:20:19 and the next state is executed
09:20:27 any question here?
09:20:39 xavih, does ec_wait block until the subrequest issued by the handler returns?
09:21:17 xavih, or did you mean that the fop 'waits' in the same state?
09:21:31 krishnan_p: no, it does not block
09:22:13 if there are pending requests, -1 is returned and it exits the loop (it will call __ec_manager() again when the request finishes)
09:22:23 it's not a synchronous wait
09:22:36 ec_wait() always returns immediately
09:22:47 right?
09:22:58 do you want to look at ec_wait() now?
09:23:01 xavih: Will it lead to a busy loop?
09:23:32 pranithk: no, if there is pending work, it will return -1, and __ec_manager() will quit the loop
09:23:47 xavih, No. I got it. I wanted to understand what you meant when you said it will 'wait'.
So I assume you meant that the fop waits in the same state until the subrequests return
09:24:02 if there isn't pending work, it will take the error code from the subrequests executed and return it
09:24:28 krishnan_p: yes, I haven't used the right expression, sorry :P
09:24:31 xavih: I don't understand why this piece of code needs to be in a do - while loop
09:25:14 because if the fop->handler() has not initiated any subrequest, ec_wait() will not have anything to wait for
09:25:30 so __ec_manager() should go to the next state immediately
09:25:46 no one will call __ec_manager() again for this fop because there is no pending work on it
09:26:15 pranithk: do you understand the reason?
09:26:22 xavih: nope :-(
09:26:26 xavih: thinking...
09:26:36 for example
09:26:38 xavih: what is a subrequest?
09:26:44 the first state is EC_STATE_INIT
09:26:45 xavih: go ahead with the example....
09:27:07 many fops do nothing here, only modify or store some data inside fop_data_t
09:27:42 xavih: true
09:27:59 then it returns the next state to which the machine should go
09:28:08 this is returned by fop->handler()
09:28:18 xavih: yes
09:28:38 in this case, ec_wait() will return immediately because EC_STATE_INIT didn't start any fop
09:28:53 xavih: yes
09:28:58 so the loop will call fop->handler() again using the next state (the one just returned)
09:29:29 xavih: ah! got the loop. It is running the state machine :-)
09:29:39 yes
09:29:54 xavih: but I wonder how it handles winds/unwinds...?
09:29:55 the state now can be EC_STATE_DISPATCH. In this state STACK_WIND() will be called to send the request to subvolumes
09:30:30 in this case, when ec_wait() is called it will detect that there are pending requests on the fop, and will return -1
09:30:55 ec_wait() handles winds and subrequests (other fops)
09:31:27 let's walk through the first fop - lock
09:31:38 in this case, at some point the last called wind will unwind.
When this happens, __ec_manager() will be called again to resume the state machine
09:31:51 xavih: understood :-)
09:31:59 xavih: thanks for the detailed explanation
09:32:35 ok, now we go to ec_manager_writev() in ec-inode-write.c, that is the handler for writev
09:33:00 it receives the fop and the current state
09:33:43 for EC_STATE_INIT, it basically prepares the write buffers and transforms offsets and sizes
09:33:56 do you need any clarification here?
09:34:18 xavih: none for me
09:34:44 xavih, No.
09:35:09 all state machine handlers call ec_default_manager() to handle common state transitions
09:35:31 we may go there later if needed, ok?
09:35:48 or maybe it would be better to look at it now?
09:35:59 what is the next state after INIT?
09:36:09 we will go there if it lives in default_manager
09:36:30 it depends on the flags specified for the fop. It's defined in ec_default_manager()... :P
09:36:37 ok, let's go there...
09:37:04 ec_default_manager() in ec-common.c
09:37:27 on INIT, if the flag EC_FLAG_LOCK is set, it jumps to EC_STATE_LOCK, otherwise it jumps directly to EC_STATE_DISPATCH
09:37:46 in the case of writev, EC_FLAG_LOCK is set, so we go to EC_STATE_LOCK
09:38:16 ec_manager_writev() does nothing in this state, so it simply executes the code from ec_default_manager()
09:38:34 here it calls ec_lock() (we'll see that in a moment)
09:38:58 then, if the preop flag is set, we jump to EC_STATE_PREOP, otherwise to EC_STATE_DISPATCH
09:39:06 I think you see the logic, right?
09:39:14 now, ec_lock()
09:39:27 xavih, yes.
09:39:43 one minute, please...
09:40:55 sorry, I'm here
09:41:00 ok
09:41:16 in ec_lock() it looks at the flags to see what it should lock
09:42:00 this only happens if the current fop is not initiated as a subrequest of another fop (in which case the first fop should have locked whatever was necessary)
09:42:26 this is tested by looking at whether fop->parent is NULL or not
09:42:47 the EC_FLAG_LOC_xxx flags are somewhat complex
09:43:36 xavih, that is fine. This means it's interesting too!
09:44:05 they are used for two purposes. They indicate if the inode or entry must be locked, and also if the inode or entry should be marked when some subvolumes return mismatching answers
09:44:58 when not all subvolumes agree on the same answers, the subvolumes that belong to answer groups with fewer members are marked as bad to avoid using them on future requests
09:45:15 xavih, is the behaviour the same even for fd based operations?
09:45:21 self-heal clears these marks when it's healed
09:45:23 krishnan_p: yes
09:46:44 depending on these flags, ec_lock_entry() or ec_lock_inode() is called
09:47:38 when you do the "mark", is it persistent? (i.e. stored in an extended attribute)
09:47:55 both look to see if that lock is already acquired by this fop, and if not, it initiates a subrequest and adds an entry into fop->lock_list
09:48:11 this list is later used to do unlocks
09:48:21 dlambrig: no, it's stored only in memory
09:48:49 if the client crashes, the first access to the same entry/inode will detect the discrepancy again and mark it again
09:48:57 unless self-heal has already solved it
09:49:33 any doubt on locking functions?
09:50:08 ok
09:50:12 xavih: if a different client heals, how does the mark get cleaned up?
09:51:07 hagarth: self-heal is currently done on the client side, so when self-heal detects that it's all ok, it clears the mark from that client
09:51:40 xavih: what happens if more than one client attempts to self-heal?
09:51:49 and all clients have marked?
09:52:45 hagarth: the metadata is healed using locks, so only one client can heal at a time. The second client will see a healed inode and clear the mark
09:53:00 xavih: right, that is along expected lines.
09:53:04 however I've just seen a possible problem with data healing... I'll look at it...
09:53:29 xavih: ok
09:53:32 xavih, so what happens to the second self-heal when the first self-heal is still in progress?
09:53:38 only one client will heal data, but another one can assume it's healed before it really is, I think...
09:54:21 krishnan_p: it waits. But the locks are very short, only to heal the metadata. Then they are unlocked
09:54:32 data self-heal is done by locking the file in fragments
09:54:41 xavih, is this a synchronous wait?
09:54:42 xavih: ok
09:54:57 after we finish the write flow, perhaps in another meeting, it would be good to discuss heal.
09:55:08 we can continue with write for now though
09:55:08 krishnan_p: yes, but only for self-heal. normal fop execution and self-heal are asynchronous
09:55:19 xavih, ok
09:55:21 ok
09:55:50 so, when the locks finish, ec_locked() is called
09:56:05 ok..
09:56:36 here I only update the parent fop (the current write) with the valid subvolumes, in case the lock failed on some of them
09:56:49 this way the write won't be sent to bricks that failed to lock the inode
09:56:59 the next state is EC_STATE_PREOP
09:57:03 hang on
09:57:05 :)
09:57:26 let's go into how the second fop is created (the lock state machine), and the relationship of parent to child
09:57:49 ah, ok
09:57:56 this is a cool part
09:58:22 the fop_data_t of each fop is attached to the frame created for that request (stored into fop->frame)
09:58:55 when a new fop is created using ec_fop_data_allocate(), the first parameter is a frame
09:59:33 ec_fop_data_allocate() looks at frame->local to see if this is a subrequest (a top level call will have frame->local == NULL)
09:59:53 if it's not NULL, frame->local is assumed to be the parent of the new fop
10:00:45 when a fop is a child of another one, it increases the refs and jobs count of the parent (this is used in ec_wait() later to know that there is pending work)
10:00:50 so the target and minimum for a child fop are the same as the parent's?
10:01:26 not necessarily. It depends on the arguments specified in each ec_fop_data_allocate()
10:01:32 Ok
10:02:00 anything else on this parent/child binding?
10:02:10 so another state machine is invoked
10:02:16 the lock state machine
10:02:20 yes
10:02:31 xavih, how does the child state machine give control back to the parent state machine?
10:02:37 it initiates a full new fop
10:04:05 krishnan_p: when the child state machine finishes, it will call ec_fop_data_release() for the last time. In this case, ec_parent_resume() is called
10:04:34 that basically restarts the state machine of the parent
10:04:41 xavih, OK
10:04:59 can we walk through the callback flow of the lock request?
10:05:51 ok, we can go there.
There are some interesting points there
10:06:34 at a high level, when a blocking lock (of any type) is requested, it is transformed to a non-blocking request and sent to all subvolumes in parallel
10:07:42 if any of the subvolumes returns EAGAIN (meaning that the lock cannot be immediately acquired), all locked volumes are unlocked and the lock request is restarted in blocking mode, but sending the request one by one to the subvolumes
10:08:24 do you want to see this in detail through the code?
10:08:29 are you in ec_lock_check?
10:09:26 yes, here is where the logic is processed
10:09:37 ok,
10:09:47 do you need more detail at some point?
10:10:46 xavih, could you give an overview of the states of a blocking lock fop which failed to acquire the lock on all servers when tried non-blocking?
10:10:54 locking functions have the special thing that they use EC_MINIMUM_ALL to enforce that a good answer is only accepted if all alive subvolumes agree
10:11:39 later, if only N - R are obtained, it's also accepted, but this is a specific handling of locks
10:12:20 is it all_alive_subvols && (resp_count >= N-R)?
10:12:36 krishnan_p: if any or all subvolumes failed with EAGAIN, it will have notlocked != 0
10:13:31 krishnan_p: sorry, I don't understand...
10:14:13 xavih, you said that N-R replies are enough for a 'good' answer, right after saying all alive subvolumes need to agree on the response for the lock request
10:14:29 so, I was wondering if both these conditions were required to be met
10:15:05 when a failed non-blocking lock is processed in ec_lock_check(), it returns the mask of subvolumes that must be unlocked (i.e.
the ones where the lock succeeded) and returns -1 to indicate that the lock should be restarted in incremental mode
10:16:03 krishnan_p: EC_MINIMUM_ALL means that the handling of answer combination will only accept as good a group of answers formed by all living subvolumes
10:16:38 xavih, ok
10:16:38 if there is more than one group, none of them will satisfy the condition. In this case, the callback function of ec_inodelk() will return an EIO error
10:17:11 however the callback function handles this case in a special way. It looks at all groups and at why they failed, and decides what to do
10:18:04 it's here where it can decide that even if there isn't a group that contains all alive bricks, one of the groups can be taken as the valid one
10:18:34 this happens when notlocked == 0
10:18:49 fop->answer contains the answer that has been accepted
10:19:33 it will be NULL if not all bricks agree, but if some of the answers are enough (i.e. form a group of at least N - R) it will be accepted as the good answer...
10:19:52 I think I'm complicating this more than necessary, sorry...
10:20:05 I'll have to think about a simpler way to explain it...
10:20:30 anything else on locking?
10:20:40 let's discuss the callback function lock uses
10:20:53 ec_lock_check()?
10:21:12 is that the callback?
10:21:22 from ec_inodelk_cbk?
10:21:37 well, I think I didn't use the right word...
10:22:09 the callback from ec_inodelk() will receive the final result of the lock (all this I've explained is internal fop management)
10:22:59 For example, ec_entrylk_cbk()
10:22:59 the callback I was referring to is ec_lock_check(), which is called from ec_manager_inodelk() at state EC_STATE_REBUILD.
10:23:29 xavih, I think I understood what you explained with regard to all alive bricks vs N-R to decide a good answer. But let me clarify offline.
10:23:39 the REBUILD state is processed just before calling the callback function, to regenerate or "correct" the answer
10:23:49 krishnan_p: ok
10:24:36 lock functions use the REBUILD state to decide if the answer that has been combined should be sent to the callback or not
10:25:02 for example. The first execution of a blocking lock will be translated to a non-blocking lock
10:25:20 the answer of this request will arrive at the REBUILD state
10:25:26 when is ec_entrylk_cbk() called?
10:26:11 ah, ok, this is the callback of the WIND, not the callback of the fop
10:26:31 sorry to mess all this up, I didn't understand your question :-/
10:26:31 yes
10:26:54 ok, ec_entrylk_cbk() will be called for every WIND call you made
10:26:56 each individual subvolume gets a wind, and a callback.. this is the part I meant
10:27:40 there, as in any other fop, it will construct a cbk_data_t structure with all arguments (similar to ec_fop_data_t)
10:28:18 the only interesting thing here is that ec_lock_handler() will determine if the current answer is valid or not.
10:28:32 what does ec_complete() do? :)
10:28:59 your variable fop->winds is the number of subvolumes you have sent to
10:29:13 if it reaches 0, you can "report"?
10:29:40 basically it looks at whether the current operation is being made incrementally or not. If not, the normal ec_combine() is done; otherwise, any failure other than ENOTCONN is handled as a failure that does not allow the lock to complete successfully
10:29:56 ec_complete() is basically used to inform that a wind operation finished
10:30:28 I did not see the difference between ec_report() and ec_resume()
10:30:44 when all wind operations have finished, it resumes the state machine execution. The result will be reported when the state reaches EC_STATE_REPORT
10:30:58 ec_resume is used to continue execution of the state machine of the fop
10:31:11 ec_report is used to call the fop callback
10:31:17 Ok
10:31:53 I think I'm a bit lost... where do you want to continue?
10:32:44 is there any doubt on locking functions?
10:33:05 xavih, I have a suggestion
10:33:10 all lock functions use the same logic ((f)inodelk, (f)entrylk, lk)
10:34:08 Why don't we send you a mail which outlines the lifecycle of a FOP, in terms of individual winds/unwinds to/from the subvols and how the responses are aggregated and sent back to the upper layers?
10:34:39 So that you can fill that skeleton/template with the functions that cluster/ec uses/employs at those checkpoints in the execution
10:34:46 Does that sound OK to you?
10:35:33 The skeleton would be translator agnostic. Something like, what function is called once a response from a client subvol reaches cluster/ec for a given FOP, etc.
10:35:35 I can do that, however this management is somewhat different for locking functions because they handle the normal answer in a special way to be able to restart the same request using blocking and incremental modes
10:35:47 but I think it could be easier to follow
10:36:01 first I can explain the "normal" flow and then the special case of locks
10:36:03 xavih, afr_nonblocking_inodelk and entrylk employ a similar strategy
10:36:24 yes, I know, I've used the same idea
10:36:28 xavih, so the locking algorithm seems fine.
10:36:59 The part that is different (and new to me) is the way state machines are transferring control across winds to different subvols
10:37:22 ok, I'll try to explain it better
10:37:33 would an email to gluster-devel be ok?
10:37:44 To understand this better, it would help if we started with something more familiar that isn't different in cluster/ec as well. The lifecycle of a FOP within a xlator.
10:38:11 xavih, I will send out that mail after this meeting, which you could answer to.
Yes, I will CC gluster-devel
10:38:16 yes, in writev it's simpler but we have been jumping from one place to another :P
10:38:58 xavih, it's hard to keep a few questions from cropping up when something new is being explained I guess :)
10:39:24 also, lock is part of write. I see them as the same transaction, personally.
10:39:37 krishnan_p: yes, yes, I know it's difficult to understand so many things
10:39:38 we can stop here
10:39:59 this has been extremely helpful - I have learned a lot
10:40:03 dlambrig: yes, yes, it's not your fault, but there are a lot of details and it's difficult for me...
10:40:15 xavih, But your explanations have made our understanding better than before. With a few more meetings we should be on the same page, I hope.
10:40:26 :)
10:40:47 your explanations have been very good
10:41:00 we will take a few days to digest this latest meal you have served us :)
10:41:24 dlambrig: they generally are. Even his explanations on gluster-devel share the same clarity :-)
10:41:29 I have difficulties sorting things in the best way to be understood...
10:41:52 can you meet Tuesday? That will be my last day in Bangalore where we are all together
10:41:55 pranithk: thanks :)
10:42:08 xavih: :-)
10:42:43 I can try to do some high level explanations of the general working and start from there to see details of some specific fops
10:43:47 I think the fop piece is better understood
10:44:04 could we discuss healing on Tuesday?
10:44:45 I think a generic overview of the state machine states and their generic meaning and purpose would be interesting, to follow the details of specific and more complex fops
10:44:59 xavih, Yes.
That will be of great help
10:44:59 dlambrig: as you prefer
10:45:19 xavih, we could cover the details of the state machine and fops in a mail over gluster-devel
10:45:19 ok, let us catch up on Tuesday around the same time
10:45:39 krishnan_p: ok
10:45:45 cool
10:45:49 hagarth: ok, perfect
10:45:54 xavih, thanks a lot for patiently explaining
10:46:04 krishnan_p: yw
10:46:18 krishnan_p: I hope I haven't bored you too much :P
10:46:26 xavih: thanks for this!
10:46:31 #endmeeting