How are USB peripherals' bIntervals enforced?

Asked by 只谈情不闲聊, submitted on 2019-12-25 02:44:33

Question


I have a Full Speed USB HID device; the relevant Endpoint Descriptor in its configuration declares a bInterval of 8, meaning 8 ms.

The following extract was obtained from a USB descriptor dumper while the device's driver was HidUsb:

Interface Descriptor: // +several attributes
------------------------------
0x04    bDescriptorType
0x03    bInterfaceClass      (Human Interface Device Class)
0x00    bInterfaceSubClass   
0x00    bInterfaceProtocol   
0x00    iInterface

HID Descriptor: // +bLength, bCountryCode
------------------------------
0x21    bDescriptorType
0x0110  bcdHID
0x01    bNumDescriptors
0x22    bDescriptorType   (Report descriptor)
0x00D6  bDescriptorLength

Endpoint Descriptor: // + bLength, bEndpointAddress, wMaxPacketSize
------------------------------
0x05    bDescriptorType
0x03    bmAttributes      (Transfer: Interrupt / Synch: None / Usage: Data)
0x08    bInterval         (8 frames)
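As an aside, how the host turns bInterval into an actual polling period depends on the bus speed (USB 2.0 spec, section 9.6.6). Here is a minimal sketch of that rule; the helper and its name are mine, only the speed constants are libusb's:

#include <cstdint>
#include <libusb-1.0/libusb.h>

// Illustrative helper (not part of libusb): polling period implied by
// bInterval, per the USB 2.0 spec.
static double polling_period_ms(libusb_speed speed, uint8_t bInterval)
{
    switch (speed) {
    case LIBUSB_SPEED_LOW:
    case LIBUSB_SPEED_FULL:
        // Low/Full Speed: bInterval counts 1 ms frames -> 8 means 8 ms.
        return (double)bInterval;
    case LIBUSB_SPEED_HIGH:
        // High Speed: period = 2^(bInterval - 1) microframes of 125 us.
        return (double)(1 << (bInterval - 1)) * 0.125;
    default:
        return -1.0; // other speeds not covered in this sketch
    }
}

For a Full Speed device, a bInterval of 8 therefore means one poll every 8 frames, i.e. every 8 ms.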

After switching the driver to WinUSB so I can use it, I repeatedly request IN interrupt transfers using libusb, timing both the interval between two libusb calls and the duration of each call itself, with this code:

#include <chrono>
#include <iostream>

// end is initialized before the loop so the first "for" measure is defined
auto end = std::chrono::high_resolution_clock::now();
double forTime, transferTime;

for (int i = 0; i < n; i++) {
    auto start = std::chrono::high_resolution_clock::now();
    // ms elapsed between the previous transfer's return and this call
    // (assumes a nanosecond clock tick, the usual case)
    forTime = (double)((start - end).count()) / 1000000;

    // <libusb_interrupt_transfer on IN interrupt endpoint>

    end = std::chrono::high_resolution_clock::now();
    std::cout << "for " << forTime << std::endl;

    // ms spent inside the libusb call
    transferTime = (double)((end - start).count()) / 1000000;
    std::cout << "transfer " << transferTime << std::endl;

    std::cout << "sum " << transferTime + forTime << std::endl << std::endl;
}
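For reference, the elided libusb call in the loop might look roughly like this; the device handle, the endpoint address 0x81 and the buffer size are assumptions to be adjusted to the actual device:

unsigned char buf[64];               // >= wMaxPacketSize of the IN endpoint
int transferred = 0;
int rc = libusb_interrupt_transfer(handle, 0x81 /* IN endpoint, assumed */,
                                   buf, (int)sizeof(buf), &transferred,
                                   0 /* timeout 0 = block until data */);
if (rc != 0)
    std::cerr << "transfer failed: " << libusb_error_name(rc) << std::endl;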

Here's a sample of the values obtained:

for 2.60266
transfer 5.41087
sum 8.04307     //~8

for 3.04287
transfer 5.41087
sum 8.01353     //~8

for 6.42174
transfer 9.65907
sum 16.0808     //~16

for 2.27422
transfer 5.13271
sum 7.87691     //~8

for 3.29928
transfer 4.68676
sum 7.98604     //~8

The sum values consistently stay very close to 8 ms, except when the time elapsed before initiating a new interrupt transfer call is too long (the threshold appears to be between 6 and 6.5 ms in my particular case), in which case the sum is 16 ms. I have once seen a "for" measure of 18 ms, with the sum precisely equal to 24 ms. Using a URB tracer (Microsoft Message Analyzer in my case), the time differences between Complete URB_FUNCTION_BULK_OR_INTERRUPT_TRANSFER messages are also multiples of 8 ms - usually 8 ms. In short, they match the "sum" measures.

So it is clear that the time elapsed between the returns of two libusb interrupt transfer calls is a multiple of 8 ms, which I assume is related to the bInterval value of 8 (Full Speed -> 1 ms frames -> 8 ms).

But now that I have, I hope, made clear what I'm talking about: where is that enforced? Despite research, I cannot find a clear explanation of how the bInterval value affects things.

Apparently, this is enforced by the driver.

Therefore, is it:

  • The driver forbids the request from firing until 8 ms have passed. This sounds like the most reasonable option to me, but in my URB trace, Dispatch message events were raised several milliseconds before the request came back. That would mean the real time at which the data leaves the host is hidden from me/the message analyzer.

  • The driver hides the response from me and the analyzer until 8 ms have passed since the last response.

If it is indeed handled by the driver, there is a lie somewhere in what's shown to me in the log of exchanged messages. A response should come immediately after a request, yet this is not the case. So either the request is sent later than the displayed time, or the response comes back earlier than displayed.

How is the bInterval enforced?

My ultimate goal is to disregard that bInterval value and poll the device more frequently than every 8 ms (I have good reason to believe it can be polled as often as every 2 ms, and a period of 8 ms is unacceptable for its usage). But first I would like to know how the current limitation works, and whether what I'm seeking is possible at all, so that I understand what to study next (e.g. writing a custom WinUSB driver).


Answer 1:


I have a FullSpeed USB Device

Careful: did you verify this? 8 ms is the limit for low-speed USB devices, which many common mice and keyboards may still be using.

The 8 ms scheduling is done inside the USB host controller driver (ehci/xhci), AFAIK. You could try to game this by releasing and re-claiming the interface - not tested, though. (Edit: Won't work, see comment.)
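With libusb, that (unsuccessful) experiment would look roughly like this; the interface number 0 is an assumption:

// Release and re-claim the interface; per the edit above, this does NOT
// actually reset the host controller's interrupt schedule.
libusb_release_interface(handle, 0);   // interface number assumed
libusb_claim_interface(handle, 0);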

A USB device cannot talk on its own, so it has to be the request that is delayed. Note that a device can also NAK any interrupt IN request when there is no new data available. This simply adds another bInterval milliseconds to the timing.
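From user space, a NAK round is indistinguishable from scheduling delay: a blocking transfer simply returns one or more bInterval periods later. A minimal sketch, reusing the hypothetical handle, endpoint and buffer from above, that bounds the wait with a timeout:

int transferred = 0;
// Wait at most ~3 polling intervals (24 ms for bInterval = 8).
int rc = libusb_interrupt_transfer(handle, 0x81, buf, (int)sizeof(buf),
                                   &transferred, 24);
if (rc == LIBUSB_ERROR_TIMEOUT)
    std::cout << "no new data: the device NAKed every poll in this window" << std::endl;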

writing a custom WinUSB driver

Not recommended - replacing a Windows-supplied driver is quite a hassle. Our libusb-win32 replacement driver for a USB CDC device breaks on every major Windows 10 upgrade: once the upgrade is finished, the device shows up as a COM port instead of using libusb.



Source: https://stackoverflow.com/questions/54670280/how-are-usb-peripherals-bintervals-enforced
