===== git log =====
commit 74f93a8662858a3b41e0c153cc0976ea76ff1eae
Author: Rinku Kothiya <rkothiya@redhat.com>
Date:   Tue Mar 30 10:34:44 2021 +0530

    doc: Updated release 9.1 notes (#2302)
    
    Added
    * Provided an option to enable/disable storage.linux-io_uring during compilation
    * Heal data in 1MB chunks instead of 128KB to improve healing performance
    
    Updates: #2301
    
    Change-Id: Iae49287cca00681426b4ecac85f1122912492ed5
    Signed-off-by: Rinku Kothiya <rkothiya@redhat.com>

commit b313a20f342b74809972ed308c1415769a881fc5
Author: Ravishankar N <ravishankar@redhat.com>
Date:   Mon Mar 29 11:05:13 2021 +0530

    afr: make fsync post-op aware of inodelk count (#2273) (#2297)
    
    Problem:
    Since commit bd540db1e, eager-locking was enabled for fsync. But on
    certain VM workloads with sharding enabled, the shard xlator keeps sending
    fsync on the base shard. This can cause blocked inodelks from other
    clients (including shd) to time out due to call bail.
    
    Fix:
    Make afr's fsync aware of the inodelk count and not delay the post-op +
    unlock when the inodelk count is > 1, just like writev.
    
    The code is restructured so that any fd-based AFR_DATA_TRANSACTION can be
    made aware by setting GLUSTERFS_INODELK_DOM_COUNT in the xdata request.
    
    Note: We do not know yet why VMs go into a paused state because of the
    blocked inodelks, but this patch should be a first step in reducing the
    occurrence.
    
    Updates: #2198
    Change-Id: Ib91ebdd3101d590c326e69c829cf9335003e260b
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>

commit fc1b424cc07572066d7bb4f9064664946842d70a
Author: Ravishankar N <ravishankar@redhat.com>
Date:   Mon Mar 29 10:59:12 2021 +0530

    configure: add linux-io_uring flag (#2060) (#2295)
    
    By default, if liburing is not present on the machine where gluster rpms are

More commit messages for this ChangeLog can be found at
https://forge.gluster.org/glusterfs-core/glusterfs/commits/v9.1
