[PATCH blk-next 1/2] blk-mq-rdma: Delete not-used multi-queue RDMA map queue code
Sagi Grimberg
sagi at grimberg.me
Tue Sep 29 14:24:49 EDT 2020
>>> From: Leon Romanovsky <leonro at nvidia.com>
>>>
>>> The RDMA vector affinity code is not backed up by any driver and always
>>> returns NULL to every ib_get_vector_affinity() call.
>>>
>>> This means that blk_mq_rdma_map_queues() always takes fallback path.
>>>
>>> Fixes: 9afc97c29b03 ("mlx5: remove support for ib_get_vector_affinity")
>>> Signed-off-by: Leon Romanovsky <leonro at nvidia.com>
>>
>> So you guys totally broke the nvme queue assignment without even
>> telling anyone? Great job!
>
> Who is "you guys"? And it wasn't silent either; I'm sure that Sagi knows the craft.
> https://lore.kernel.org/linux-rdma/20181224221606.GA25780@ziepe.ca/
>
> commit 759ace7832802eaefbca821b2b43a44ab896b449
> Author: Sagi Grimberg <sagi at grimberg.me>
> Date: Thu Nov 1 13:08:07 2018 -0700
>
> i40iw: remove support for ib_get_vector_affinity
>
> ....
>
> commit 9afc97c29b032af9a4112c2f4a02d5313b4dc71f
> Author: Sagi Grimberg <sagi at grimberg.me>
> Date: Thu Nov 1 09:13:12 2018 -0700
>
> mlx5: remove support for ib_get_vector_affinity
>
> Thanks
Yes, basically usage of managed affinity caused people to report
regressions because they could no longer change irq affinity from
procfs. Back then I started a discussion with Thomas about making
managed affinity still allow userspace to modify it, but that was
dropped at some point. So currently rdma cannot do automatic irq
affinitization out of the box.
More information about the Linux-nvme mailing list