So, hi everyone, I'm Lorenzo Miniero, and just before we start, you may see that I have a small contraption over there. That's because I am an open source developer, but I'm also a musician in my spare time, and I'm managing the music production devroom tomorrow. So of course I had to bring a foot controller that acts as a clicker for my own slides. I mean, this is just me controlling the slides; it's just for fun.

Some of you may know me as the main author of Janus, which is a WebRTC server, and for questionable choices in picking pictures, of course. But today I will not be talking much about WebRTC. I will mention WebRTC a little, but I'll focus more on another project instead, which is called imquic. So I'll be talking a bit about the QUIC protocol and the efforts that have been going on to do real-time media on top of QUIC as well, whether it is possible with the existing solutions and so on, and my own efforts in that direction.

I won't have much time to talk about QUIC in general, so just a very quick crash course if you know nothing about it. It's basically a new transport protocol that does not live in the kernel: it lives in user space, because it runs on top of UDP, which acts as the actual transport underneath. It was conceived in order to try and get the best out of both the TCP and UDP worlds together; a very informal definition might be "TCP on steroids over UDP", if you want something like that. While it was initially designed mostly for HTTP/3, so to optimize over the different iterations of HTTP, you can basically build any application on top of it. It's a transport protocol in itself, and it has built-in capabilities, most notably encryption, which is part of the handshake and helps cut down latency and so on. One other important feature is that it supports multiple streams out of the box, which means that unlike TCP you can have multiple streams over the same QUIC connection, which helps avoid all those nasty head-of-line blocking issues that you typically have with TCP. So, again, it tries to solve many of the problems that TCP had, and, since it uses UDP, it should allow for more than TCP allowed.
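To make the head-of-line blocking point a bit more concrete, here is a toy sketch in plain Python. It involves no real QUIC at all: the packet tuples and the "lost" set are made up purely for illustration, to show that a single ordered byte stream stalls behind a missing packet while independent streams do not.

```python
# Toy illustration (not real QUIC): why independent streams avoid
# head-of-line blocking. We "lose" one packet and see what data the
# application could read in each model before a retransmission arrives.

packets = [("A", 1), ("B", 1), ("A", 2), ("B", 2), ("A", 3), ("B", 3)]
lost = {("A", 2)}  # pretend this packet was dropped

# TCP-like: one ordered byte stream; everything after a hole must wait.
delivered_tcp = []
blocked = False
for pkt in packets:
    if pkt in lost:
        blocked = True          # hole in the single ordered stream
    elif not blocked:
        delivered_tcp.append(pkt)

# QUIC-like: per-stream ordering; a hole only blocks its own stream.
delivered_quic = []
blocked_streams = set()
for stream, seq in packets:
    if (stream, seq) in lost:
        blocked_streams.add(stream)
    elif stream not in blocked_streams:
        delivered_quic.append((stream, seq))

print("TCP-like delivery: ", delivered_tcp)   # stops at the hole
print("QUIC-like delivery:", delivered_quic)  # stream B is unaffected
```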
Which brings us to the next question: can we use QUIC for real-time media? It works great for HTTP/3, of course, but real-time media is an entirely different beast. We know that WebRTC fits that bill: there are several properties it has that make it possible. And there are some parallels that we can draw for QUIC as well, in terms of the functionality that is there and so on, mostly because, again, QUIC is its own transport protocol, but it goes over UDP, which means it can take some gains in terms of latency from there as well.

There are some big differences at the moment in terms of the full stack, because with WebRTC we have the whole media stack provided by the browser, or by our library and so on. If we want to do the same with QUIC, we have WebTransport to use QUIC in a browser, but the whole media stack we have to write ourselves using WebCodecs and other things. That is of course challenging, but in theory it gives you more control to do cool things. So this means that, at least in theory, QUIC is a potential candidate for real-time media; let's see what the actual protocols are that should make it possible.

The first thing that comes to mind is: why not put RTP over QUIC itself? We all know about RTP, it works well for our use cases, and there is indeed a draft that explains how you can put RTP over QUIC through different mappings. It touches on the things that you don't really need to do anymore with QUIC, because, for instance, many of the RTCP statistics are already provided to you by QUIC out of the box, and it focuses mostly on framing. That's because with QUIC you are free to act a bit like UDP when you use datagrams, and a bit more like TCP when you use streams, so there are different ways you can multiplex RTP packets over there, with different choices depending on what you need to do.

In terms of framing, it looks a bit like this. One important element is that each RTP packet is also tagged with a flow identifier, and what this flow identifier means is entirely up to the application: it can be "flow ID 3 is my video", or "flow ID 3 is all my RTP packets, no matter what they are". It's really up to you, it's negotiated in different ways, and depending on the multiplexing you're using, this flow identifier may need to be written for every packet, or just for the first one in a sequence, for instance if you're using streams. Another important distinction is that if you're using streams, a stream is basically like a mini TCP, which means that you need to provide framing exactly as we need to do for RTP over TCP: we need to provide a length attribute. We don't need to do that for datagrams.
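A minimal sketch of that framing, following the description above rather than the exact draft syntax: `roq_datagram` and `roq_stream_chunk` are hypothetical helper names, and the variable-length integer encoding is the standard QUIC one from RFC 9000.

```python
import struct

def quic_varint(value: int) -> bytes:
    """Encode an integer as a QUIC variable-length integer (RFC 9000)."""
    if value < 1 << 6:
        return struct.pack(">B", value)
    if value < 1 << 14:
        return struct.pack(">H", value | 0x4000)
    if value < 1 << 30:
        return struct.pack(">I", value | 0x80000000)
    return struct.pack(">Q", value | 0xC000000000000000)

def roq_datagram(flow_id: int, rtp_packet: bytes) -> bytes:
    # One QUIC DATAGRAM per RTP packet: flow id, then the packet as-is.
    # Datagram boundaries already frame the packet, so no length is needed.
    return quic_varint(flow_id) + rtp_packet

def roq_stream_chunk(flow_id_once: bytes, rtp_packet: bytes) -> bytes:
    # On a stream (a "mini TCP"), packets need explicit framing: here a
    # varint length prefix, with the flow id written once at stream start
    # (pass b"" for subsequent packets on the same stream).
    return flow_id_once + quic_varint(len(rtp_packet)) + rtp_packet

fake_rtp = bytes(12)  # stand-in for a real RTP packet
print(roq_datagram(3, fake_rtp).hex())
print(roq_stream_chunk(quic_varint(3), fake_rtp).hex())
```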
Now, RTP over QUIC is interesting, but it's not really what the IETF is most interested in, because it's again an attempt to take an existing protocol and just shove it on top of QUIC, which works for some use cases but is not really the optimal solution. So they started working on a different approach that is called Media over QUIC. With Media over QUIC they are trying to build a new protocol that is aware of QUIC and can try to take advantage of all its strengths, effectively. They are designing a generic ingest and distribution kind of protocol whose main aim is configurable latency, so that it's as generic as possible: you can use it for conferencing, or broadcasting, or even video on demand if you want, or something like that.

From an architectural perspective it looks, if you are familiar with WebRTC SFUs, basically like the same kind of approach: a pub/sub mechanism, with media being published and media being distributed accordingly, all of that being very aware of QUIC as its foundation, so trying to take advantage of all the different things it offers, and, most importantly, being independent of video formats and media formats in general; I'll explain why that is in a second.

If we look at this graph here, which is the one we pretty much always show, where depending on the latency you are willing to accept you pick the best protocol for the job, we know that for higher latency HLS and DASH are usually what you go for, and if you want really low latency WebRTC is pretty much the only solution. MoQ initially tried to cover a smaller gap in there, but they soon found out that they wanted something like this instead: basically a single protocol to rule them all, with the configurable latency I mentioned initially. So I may want to watch something in real time, or with a 30 second delay, or maybe watch something from yesterday, all using the same protocol, dynamically.

From an architectural perspective, as I mentioned, it's very similar; if you're familiar with WebRTC, this will not surprise you. You have a publisher pushing objects to a relay, and the relay pushes them to all the subscribers that are interested in them. The architecture can be as complex as you want, depending for instance on whether you're doing some sort of CDN distribution; it can get quite complex, and everything is specified quite nicely in the draft.

An interesting property of MoQ is that, again, it is completely independent of formats and whatnot. With WebRTC we have specific rules for how to do audio and video, all those specific codecs and so on; with MoQ there is nothing like that. Everything is just an object. We carry objects of data around, and what those objects are is entirely up to the application; there are ways to signal it so that receivers are ready to process what is coming in. In general we have objects, and we can group objects together, which makes sense with video frames if you start from a keyframe and all the other objects are the differences from that keyframe, basically a group of pictures. A sequence of groups of objects is a track, which can be an audio track, a video track, a subtitle track, whatever kind of track you want.
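Just to make that object/group/track hierarchy concrete, here is a tiny sketch; the class and field names are purely illustrative and are not the MoQ wire format or any implementation's API.

```python
from dataclasses import dataclass, field

@dataclass
class MoqObject:
    group_id: int      # e.g. which group of pictures this belongs to
    object_id: int     # position within the group (0 could be the keyframe)
    payload: bytes     # opaque to the protocol: the application decides

@dataclass
class MoqTrack:
    namespace: str     # announced by the publisher
    name: str          # e.g. "video0", "audio0", "subtitles"
    objects: list[MoqObject] = field(default_factory=list)

# A video track where each group starts at a keyframe (object 0) and the
# following objects are the dependent frames of that group of pictures.
video = MoqTrack("example.org/demo", "video0")
video.objects.append(MoqObject(group_id=0, object_id=0, payload=b"<keyframe>"))
video.objects.append(MoqObject(group_id=0, object_id=1, payload=b"<delta frame>"))
print(video)
```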
There is much more to say about MoQ in general, but I don't have much time, so I'll dwell a bit more on some of the implementation aspects. Even though it is a new protocol, and a very active one where changes are happening all the time, there are already a lot of implementations out there, in different languages, by some very big companies as well; a lot of companies are invested in it, and you can find more information in the wiki if you want to know a bit more about those.

I personally started working on this about a year ago, more or less, in a new library that I called imquic; if you're curious about the name, you can ask me later. Basically the idea was to write a new open source library. Unlike Janus, which is a server-side component, I wanted something a bit more flexible this time, so I implemented this as a QUIC library, so that I could build different things on top of it. It is open source: you can find the repo there, and an introductory blog post as well. The idea is that the library would be generic, so you can use it as a generic QUIC or WebTransport library, but I also wanted to build in support for the media protocols that we've seen before, both RTP over QUIC and Media over QUIC, mostly because those are the protocols that I wanted to experiment with. The idea was also to provide higher level APIs to basically hide all the QUIC complexity, which may or may not be possible in the long run; I'm not exactly sure at the moment.

And of course, I'm very far from done. It is very much a personal experiment so far: basically everything is written from scratch, even the QUIC stack, which is probably a big mistake, and I'll explain later why. But it has been very interesting as a testbed and sandbox to experiment with all these new protocols, with new ways of exchanging media and stuff like that.
For instance, it allowed me to play a bit with WebTransport, seeing if I could talk from a browser to an application using my library, just using WebTransport. I also put the library inside Janus as an experiment, so that I could bridge data channels to WebTransport and have them chat with each other somehow. Just to see if I could; nothing groundbreaking in that, but it basically allowed me to do some interesting experiments.

And of course I wanted to experiment with the media protocols as well. I started with RTP over QUIC because it was the easier of the two: I just needed to figure out the framing. I designed a higher level API where you basically send and receive RTP packets and don't really have to care about all the QUIC aspects in the background; that was the main idea. I used that to implement a basic server, a basic client, and a basic integration in Janus, so that I could, for instance, bridge RTP coming from QUIC to WebRTC and vice versa, just to see if it was possible. I won't bother you too much with all the different experiments I did there, also because as static images the demos don't make much sense. This one, for instance, was just a way I could have RTP over QUIC on one side and then show it as WebRTC on the WebRTC side, or vice versa. Or even WebRTC on both sides with an RTP over QUIC tunnel in the middle, which is probably the only place I see RTP over QUIC having some kind of success, because it allows you to multiplex multiple RTP sessions over the same connection; it makes sense for trunking, for VoIP for instance, or something like that. But again, as I mentioned, RTP over QUIC is not really that much of interest, because even the SIP people don't seem to be very interested in it, at least for the moment.

As I mentioned, Media over QUIC is really gaining attention in the standardization bodies, so I was very much interested in that as well, coming from WebRTC myself, and I started doing pretty much the same kind of effort there: exposing APIs that map to the protocol semantics, so ways to publish, ways to subscribe, ways to announce your presence and so on, while the library itself hides all the QUIC stuff that is needed to make that happen, in a nutshell. Using that functionality I implemented a few sample applications, like a basic publisher, a basic subscriber, and a proof of concept relay that I want to expand; I want to build a stronger relay for the job, something that could be seen as the MoQ equivalent of Janus, if you will. And a few different demos, including a basic integration within Janus itself that I'll talk about in a few minutes.
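To illustrate the publish/subscribe/fan-out semantics that such a relay revolves around, here is a toy, in-memory sketch. It does no networking, speaks no actual MoQ messages, and is not imquic's API; it only shows the announce, subscribe and relay idea.

```python
from collections import defaultdict

class ToyRelay:
    """Purely illustrative in-memory relay: announce, subscribe, fan out."""

    def __init__(self):
        # (namespace, track) -> list of subscriber callbacks
        self.subscribers = defaultdict(list)

    def announce(self, namespace):
        print(f"publisher announced namespace '{namespace}'")

    def subscribe(self, namespace, track, on_object):
        self.subscribers[(namespace, track)].append(on_object)

    def publish(self, namespace, track, group_id, object_id, payload):
        # Fan the object out to everyone subscribed to this track.
        for deliver in self.subscribers[(namespace, track)]:
            deliver(group_id, object_id, payload)

relay = ToyRelay()
relay.announce("example.org/demo")
relay.subscribe("example.org/demo", "video0",
                lambda g, o, p: print(f"got group {g} object {o}: {p!r}"))
relay.publish("example.org/demo", "video0", 0, 0, b"<keyframe>")
```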
One thing that I think is important to mention, especially if at the end of the presentation you're interested in Media over QUIC in general or in this implementation: basically all the participants, in both the specification work and the different implementations, are in a dedicated Slack channel, the moq channel on the quicdev Slack group. quicdev is the Slack community that hosts everything related to QUIC development, and the moq channel is where the MoQ activity typically happens. That is also where all the implementers engage with each other, to organize interop sessions and things like that, or to figure out new updates to the specification, for instance.

I did a lot of different tests to do different things; again, images that don't make much sense without context, but there are links to different blog posts where I go into much more detail, if you're interested. I'll go through those quickly because I want to show something in particular. There is one demo in particular that is interesting, because Meta has been very active in the MoQ specification and has basically provided, as an open source repository, the means to have both a MoQ publisher and a MoQ subscriber within a browser. That means they implemented all that media stack I mentioned, all the WebCodecs integration, plus the MoQ stack, running in the browser itself, so that from a browser you can create a MoQ publisher, connect to a relay, and a subscriber can receive that media. And this is pretty much the same as the million different WebRTC demos that we've made ourselves many times. So this encouraged me to try and check whether I could indeed bridge MoQ and WebRTC to each other somehow, to see if there was some middle ground, and how much effort it would be to translate between the two.

The process I followed was this: since imquic, my implementation, is a library, I can use it within Janus somehow. So I created a new plugin in Janus that would allow me to refer to imquic and implement the MoQ parts, both the publisher and the subscriber. Starting from the publisher, this meant that I could use Meta's demo as a publisher to send media to a relay, which could very well be my own relay for that matter, and then within Janus I can subscribe to the MoQ objects and figure out how to translate them so that WebRTC users can consume them. And it turns out there were basically three different things that I needed to do.
First of all, in terms of protocol: I mentioned how MoQ is media independent and so on, but for audio and video in particular they are using a specific format called the Low Overhead Container, LOC for short, which you could see as basically the equivalent of RTP: it provides some timing information and some basic information that tells you what the object is. So first of all I needed to translate the LOC timing to the RTP timing. Then, with MoQ, every object for video is a full video frame, which means I can get a very big video frame that I need to chop up when I send it over RTP; that's another thing I had to do, splitting it into multiple RTP packets. And then, for the video itself, WebCodecs only does AVCC while RTP only does Annex B, so I had to translate the way the H.264 is packaged in there, which doesn't mean transcoding, it just means replacing some bits here and there, as you know. That's basically what I needed to do, and it allowed me to have a working MoQ-publisher-to-WebRTC-viewer kind of thing.

To go the other way around I basically had to do the reverse: a WebRTC publisher will send video frames over multiple RTP packets, so I need to reconstruct the video frame out of those; I need to do the same thing with the bitstream, replacing Annex B with AVCC, because that's what WebCodecs wants; and I need to translate whatever timing information is in RTP to the LOC format. Once I did that, everything would work in MoQ as well.
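The bitstream part really is just repackaging, as described: Annex B separates H.264 NAL units with start codes, while AVCC prefixes each NAL unit with its length. A minimal sketch of that conversion, assuming the common 4-byte AVCC length prefix; the helper names are made up for illustration.

```python
import struct

def annexb_to_avcc(data: bytes) -> bytes:
    """Split on 3/4-byte start codes, re-emit NAL units with 4-byte lengths."""
    nals, i = [], 0
    while i < len(data):
        if data[i:i + 4] == b"\x00\x00\x00\x01":
            start = i + 4
        elif data[i:i + 3] == b"\x00\x00\x01":
            start = i + 3
        else:
            i += 1
            continue
        # Find the next start code (or end of buffer) to delimit this NAL.
        next_sc = len(data)
        for j in range(start, len(data) - 2):
            if data[j:j + 3] == b"\x00\x00\x01":
                next_sc = j - 1 if data[j - 1] == 0 else j
                break
        nals.append(data[start:next_sc])
        i = next_sc
    return b"".join(struct.pack(">I", len(n)) + n for n in nals)

def avcc_to_annexb(data: bytes) -> bytes:
    """Read 4-byte lengths, re-emit NAL units separated by start codes."""
    out, i = bytearray(), 0
    while i + 4 <= len(data):
        (length,) = struct.unpack(">I", data[i:i + 4])
        out += b"\x00\x00\x00\x01" + data[i + 4:i + 4 + length]
        i += 4 + length
    return bytes(out)

# Fake Annex B access unit: an SPS-like NAL and an IDR-like NAL.
frame = b"\x00\x00\x00\x01" + b"\x67sps-data" + b"\x00\x00\x01" + b"\x65idr-slice"
print(annexb_to_avcc(frame).hex())
print(avcc_to_annexb(annexb_to_avcc(frame)))
```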
With those translations in place, I could do something like this: basically a MoQ publisher and a WebRTC viewer, or vice versa. Even though these were nice experiments, I would want to do something closer to what Ali Begen did here: he recently presented a demo showing how you can do live streaming of a sports game, in this case a basketball game, using Media over QUIC, with dynamic latency and so on. Using the timeline down there you can also go back and watch a highlight that you missed, for instance, all over the same connection, which is kind of cool; it could be a game changer for this specific scenario. And since I am a football fan, and most importantly a Napoli fan, I want to do exactly the same thing, but with the things that I like. I don't know how far along I am, but hopefully we'll get there.

Which brings us to, well, I don't know if I'm right on time or if time's up, so I'll summarize briefly. The QUIC stack is something that I want to replace: I wrote it from scratch, it works fine on my laptop, and it will probably work awfully when I want to do something more interesting, so using ngtcp2 in the future is something I want to do; I haven't done it yet. I need to keep up with the specification, which is very active. And I want to work on that Media over QUIC football demo; we're not there yet.

And of course, I brought this to your attention first of all to make you aware of MoQ, if you weren't already, and also of the library, if you are curious about that. And this is the end. If you are here tomorrow as well, I'll be managing the music production devroom: finally, we have music. Open Media was very nice to host some music presentations in the past, so maybe you may be interested in that as well. So thank you. I don't know if we have time for questions, or if we have to cut it short.

[Audience] Perhaps you already said this, sorry if I missed it: what's the relationship between MoQ and WebTransport?

So the question is: what's the relation between MoQ and WebTransport? To keep it short, WebTransport is to QUIC and HTTP/3 basically what WebSocket is to TCP and HTTP. You have QUIC, and out of QUIC you can create a WebTransport session, which has its own APIs and which you create out of an HTTP/3 session: you create an HTTP/3 session, you use CONNECT in HTTP to say "I want a WebTransport session", and after that it's basically the same as QUIC, with just a very little extra information for framing purposes on the streams and so on. That gives you direct access to QUIC in the browser. And then MoQ is just the application layer on top of that, so you can either use MoQ on raw QUIC, or you can use MoQ on top of WebTransport.
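To make that bootstrap a bit more concrete, these are roughly the pseudo-header fields a client sends on the extended CONNECT request that opens a WebTransport session over HTTP/3; the authority and path here are hypothetical, and the actual encoding on the wire is QPACK inside an HTTP/3 HEADERS frame.

```python
# Sketch of the extended CONNECT request that bootstraps a WebTransport
# session over HTTP/3. After the server's 200 response, the session's
# streams and datagrams are plain QUIC with a little extra framing.

webtransport_connect = [
    (b":method", b"CONNECT"),
    (b":protocol", b"webtransport"),   # extended CONNECT
    (b":scheme", b"https"),
    (b":authority", b"relay.example.org"),  # hypothetical relay host
    (b":path", b"/moq"),                    # hypothetical endpoint path
]

for name, value in webtransport_connect:
    print(name.decode(), value.decode())
```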
[Audience] You mentioned something about QUIC and ngtcp2; what is that?

Yes, ngtcp2 is the name of one of the libraries that provide you with QUIC functionality, so one of the libraries you use if you want to have QUIC support in your application; it's a lower level implementation of the protocol itself. Since I implemented QUIC in my own way for my stack, partly because it was also a learning process, I want to replace the QUIC part and put ngtcp2 there, and then keep all the application level stuff that I implemented, including the RTP over QUIC and Media over QUIC integrations, on top of that new layer. So it would just replace the QUIC part.

And then, the last thing is: yeah, if they heard that I still call it Media over QUIC, they would probably shout at me and throw things at me, because they don't like the "media" part anymore. Officially, MoQ is now just an acronym, the idea being that whatever you put on top of MoQ is up to you. There are drafts that specify how you can containerize specific kinds of media, but that is completely orthogonal to the specification of the protocol; the protocol will always remain generic enough. And I'm very late, so sorry again. Thank you.