WEBVTT

[00:16] Hello everyone, thank you for joining. My name is Björn Ruytenberg, I'm a PhD candidate at VUSec at the Vrije Universiteit Amsterdam, and I'm a security researcher mainly specializing in UEFI, PCI Express, and hypervisor security. My previous work includes Intel Thunderbolt vulnerability research. And this is my colleague Sina.

[00:36] Hello everyone, I'm Sina, I'm also a PhD candidate at the Vrije Universiteit Amsterdam. I usually spend my time on system programming, maintaining hypervisors, Windows internals, and a couple of digital designs. You can see more of my work on my blog.

[01:00] So, we're giving two talks today at FOSDEM. The current one is an introduction to HyperDbg, where we go into detail on what it can offer you, but also on how it works under the hood. This morning we gave a talk in the security track where we discussed a case study of using HyperDbg for malware reversing. If you missed that one, no worries, there will be a recording.

[01:28] Yeah, so let's talk about HyperDbg. We are currently sitting in the virtualization track, but we're talking about debuggers, so what's up with that? Well, actually we're going to talk about both hypervisors and debuggers. So why would you want a hypervisor in a debugger? Basically, a hypervisor is highly privileged. It has complete system visibility, meaning it can observe nearly all events that occur in the operating system. At the same time, because it operates at the hypervisor level, it is also virtually completely transparent to both user and kernel space. This gives all sorts of advantages, one being stealth, which is of course very useful when debugging software that behaves differently when running in a virtualized environment or when being debugged.

[02:33] Yeah, and being a hypervisor means we also have access to a toolbox that offers features that are normally not possible with traditional debuggers.

[02:46] So, introducing HyperDbg: it's a hypervisor-assisted debugger. It leverages the virtualization extensions offered by the Intel ISA, originally intended for virtualization, but we're using them to implement a debugger. And because we operate at the hypervisor level, we also operate independently of any operating-system-level debugging APIs. This gives all sorts of advantages, and we'll be talking about those in more detail.
[03:21] The first release was for Windows, back in 2022, and it has been actively maintained since. We're currently working on a UEFI-based, OS-agnostic hypervisor agent, which means that soon we will be able to support Linux, BSD, and basically any other operating system.

[03:41] So yeah, let's talk about some debugging scenarios. If you have to debug native code, especially when you don't have access to the source code, this can be really difficult. For example, debugging a device driver, or actually looking for the device driver that is touching a certain area of memory, when you don't know which device driver it is. Well, this is exactly one of the use cases HyperDbg is useful for. You want to find out what device driver, or what user-space program, is writing into a certain memory range? No problem, you can find that out. When you have finally found which device driver is writing into that memory range, you might want to trace the stack back all the way up to user space; you know, maybe there's a user-space companion app talking to the device driver. This is also possible with HyperDbg. Maybe you want to script all of this, right? Turn all of these data points into events that trigger certain scripts and do something interesting. Or maybe you want to prevent user or kernel space from touching a range of memory. And the list goes on and on. These are all use cases that are basically not possible with a traditional debugger, and we'll show you how they can be done with HyperDbg.

[05:23] So, HyperDbg to the rescue. But before we get to that, we have to introduce some new terms. HyperDbg implements a concept that we like to call event-driven debugging. Everything that happens in HyperDbg is basically an event, and each event can trigger one or more actions: the execution of a script, the execution of a piece of assembly code that you wrote beforehand, or simply triggering a breakpoint. The inputs for these events can be anything from system calls, or returns from system calls, to EPT hooks (what those are, we'll dive into in a little bit). They can also be I/O operations, and even specific CPU instructions. Say you know that a certain piece of software, either user or kernel code, uses a particular instruction; you can trigger an event based on the use of that one instruction. There are also event calling stages: pre, post, and both. And you can do what we like to call event short-circuiting, which basically means you define a set of conditions under which the event must be triggered, and then you can simply ignore the entire event; a sketch of this dispatch logic follows.
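(For illustration, here is a minimal C sketch of how such an event could be dispatched inside a hypervisor's VM-exit handler. This is not HyperDbg's actual source; all type and function names here are hypothetical.)

```c
/* Minimal sketch of event-driven debugging: an intercepted operation
 * becomes an "event" with an optional condition, actions at the pre
 * and/or post stage, and the ability to short-circuit, i.e. drop the
 * intercepted operation entirely. Illustration only.                  */
#include <stdbool.h>

typedef enum { STAGE_PRE, STAGE_POST } stage_t;
typedef struct guest_regs guest_regs_t;          /* guest GPRs at VM-exit */

typedef struct event {
    bool (*condition)(guest_regs_t *r);          /* trigger condition      */
    bool (*action)(guest_regs_t *r, stage_t s);  /* returns true at the pre
                                                    stage to short-circuit */
    bool pre, post;                              /* enabled calling stages */
} event_t;

void emulate_intercepted_instruction(guest_regs_t *r);  /* hypothetical */
void advance_guest_rip(void);                           /* hypothetical */

static void dispatch_event(const event_t *e, guest_regs_t *r)
{
    bool short_circuit = false;

    if (e->condition && !e->condition(r))
        return;                                  /* condition not met      */
    if (e->pre)
        short_circuit = e->action(r, STAGE_PRE); /* runs BEFORE emulation  */
    if (!short_circuit) {
        emulate_intercepted_instruction(r);      /* the normal path        */
        if (e->post)
            e->action(r, STAGE_POST);            /* runs AFTER emulation   */
    }
    advance_guest_rip();                         /* skip the guest's insn  */
}
```

(The design point is that the pre-stage action runs before the intercepted operation is emulated, which is exactly what makes short-circuiting possible.)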
[06:58] This is how you can ignore read or write actions to a certain memory region, for example. You can also completely bypass events, which means that a certain range of instructions will be skipped. We implement all of this using VM-exits, but we'll talk about that later.

[07:26] Yeah, so let's dive a little bit deeper into how we implement all of this. Now we're going to talk about the techniques, the way we implement these features in HyperDbg, and how they can help us enhance our reverse engineering experience.

[07:45] The first feature we offer here uses the monitor trap flag. If you are familiar with Intel VT-x virtualization, there is the VMCS, and within it there is the MTF flag. We built a function tracing mechanism on it that you can use to follow execution from user mode into kernel mode, and directly back from kernel mode to user mode, by employing MTF. Once it's combined with a symbol server, let's say the Microsoft symbol server or a private symbol server with PDB files, you can see the names of the functions and the call tree of everything that is called. So HyperDbg is capable of tracing from user mode into kernel mode and coming back from the kernel to user mode; this is something that traditional debuggers are not capable of doing (a sketch of the mechanism follows below).

[08:51] In another scenario, we combine what we introduced before about event calling stages and event short-circuiting. There are three HyperDbg scripts here. What you see is written in HyperDbg's own language, which we call dslang. It's a customized language, quite similar to the WinDbg scripting language, but with a few differences. This is the way we create an event in HyperDbg. As you can see, we are trying to intercept the execution of certain instructions. First we go for MSRs, basically the WRMSR instruction, if you're familiar with model-specific registers on Intel. If you look at the stages here, we defined two stages, a pre stage and a post stage. Because the first example uses the pre stage, the WRMSR instruction is not yet executed, and as a result we can change the values that the WRMSR instruction wants to put into the MSR, just as if it were running on a bare-metal machine. These actions all execute before the emulation happens in the system.

[10:21] Another example is exceptions. In a hypervisor there are exceptions, there are interrupts, there are traps; all of them can be monitored and emulated by HyperDbg. In this example we use a post calling stage. Vector 0x0E is a page fault, and on Intel processors the address where the page fault happened is stored in CR2. Because we are using the post stage, we can see where this page fault happened and read that address.
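(Back to the monitor trap flag for a moment: mechanically it is a single control bit in the VMCS that forces a VM-exit after the guest retires one instruction, so a tracer simply re-arms it on every MTF exit. A rough sketch using the MSVC VMX intrinsics; the VMCS field encodings are from the Intel SDM, and trace_record_rip is a hypothetical logging helper.)

```c
#include <intrin.h>   /* __vmx_vmread / __vmx_vmwrite, MSVC VMX intrinsics */

#define CPU_BASED_VM_EXEC_CONTROL 0x00004002  /* primary proc-based controls */
#define GUEST_RIP                 0x0000681E
#define MONITOR_TRAP_FLAG         (1u << 27)  /* MTF bit in those controls   */

void trace_record_rip(size_t rip);  /* hypothetical: log RIP, resolve via PDBs */

/* Arm MTF: the guest will VM-exit (exit reason 37) after one instruction. */
static void mtf_enable(void)
{
    size_t ctl;
    __vmx_vmread(CPU_BASED_VM_EXEC_CONTROL, &ctl);
    __vmx_vmwrite(CPU_BASED_VM_EXEC_CONTROL, ctl | MONITOR_TRAP_FLAG);
}

/* Called on every MTF VM-exit: record where the guest is, re-arm, resume.
 * Because the exits keep firing across SYSCALL/SYSRET, the trace naturally
 * follows execution from user mode into the kernel and back.             */
static void on_mtf_exit(void)
{
    size_t rip;
    __vmx_vmread(GUEST_RIP, &rip);
    trace_record_rip(rip);
    mtf_enable();
}
```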
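(And here is roughly what the pre-stage WRMSR example looks like underneath: the WRMSR VM-exit arrives before the write has taken effect, so the handler can rewrite the EDX:EAX value and then perform the write on the guest's behalf. A sketch only; guest_regs_t and the helpers are hypothetical stand-ins.)

```c
#include <stdint.h>

typedef struct { uint64_t rax, rcx, rdx; /* ...other GPRs... */ } guest_regs_t;

uint64_t run_pre_stage_action(uint32_t msr, uint64_t value); /* hypothetical */
void wrmsr_on_behalf_of_guest(uint32_t msr, uint64_t value); /* hypothetical */
void advance_guest_rip(void);

/* EXIT_REASON_MSR_WRITE (32): the guest executed WRMSR, but the MSR has
 * not actually been written yet. This is exactly the "pre" calling stage. */
static void on_wrmsr_exit(guest_regs_t *r)
{
    uint32_t msr   = (uint32_t)r->rcx;                  /* MSR index        */
    uint64_t value = ((uint64_t)(uint32_t)r->rdx << 32) /* EDX:EAX value    */
                   | (uint32_t)r->rax;

    value = run_pre_stage_action(msr, value); /* the script may patch value */
    wrmsr_on_behalf_of_guest(msr, value);     /* emulate the write          */
    advance_guest_rip();                      /* original WRMSR never runs  */
}
```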
[11:13] Another example is about changing system calls. We have two examples here. In the first example we intercept the SYSCALL instruction. For the SYSCALL instruction, we first check the syscall number, which in both Linux and Windows is located in the RAX register, against some value, let's say 0x55. If the syscall number is 0x55, then we change one of the parameters: assuming it follows the fastcall calling convention, we change some bits in the RCX register, the first parameter.

[12:10] As you can see, there is also another example here that uses the pre stage, so we have the capability to bypass the system call entirely. What happens is that HyperDbg, before executing and before emulating the SYSCALL instruction, checks some conditions. If the condition is met, the event is short-circuited, which means the SYSCALL instruction is never emulated and never executed in the virtualized environment. What we do instead is change the RAX register to put a status value there, so the debuggee application thinks the SYSCALL instruction executed successfully, and the value in RAX, something like "access denied" or "file cannot be accessed", appears to be returned from the system call. But in reality there was no SYSCALL, and nothing was executed in the system. This is how you can use short-circuiting (there is a sketch of this below). And you can combine short-circuiting with all of the events in HyperDbg: events related to WRMSR, RDMSR, CPUID, RDTSC and RDTSCP, or RDPMC; every event that is available in HyperDbg can be short-circuited.

[13:45] Another interesting example is how we can ignore certain memory writes, here by using EPT hooks, built on EPT, Intel's second-level page table implementation. As you can see, we put a hook on a certain address. Let's say it's a variable inside a user-mode application, but it could also be in kernel mode, in a driver. We then check all modifications, any memory writes to this address, or memory reads from it. Because it's a "w" here, it means we want to intercept and trigger events for memory writes, and we check the calling stage. If the memory is being set to something, let's say a specific value, then we can easily change that value as if nothing happened; we completely change the value of the memory. How could this be useful? Assume you have a program with different threads accessing a specific global variable, and you just want to make sure that this global variable is never changed to a certain value. Using this script, you can short-circuit, observe, modify, or prevent any modification of the target memory.
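(The memory-write example rests on EPT: the hypervisor removes the write permission from the guest-physical page containing the watched variable, so every write attempt causes an EPT-violation VM-exit where the event runs. A compact sketch under those assumptions; the helpers are hypothetical.)

```c
#include <stdbool.h>
#include <stdint.h>

#define EPT_READ  (1ull << 0)   /* EPT entry permission bits (Intel SDM) */
#define EPT_WRITE (1ull << 1)
#define EPT_EXEC  (1ull << 2)

uint64_t *ept_pte_for(uint64_t gpa);     /* hypothetical EPT walker      */
void flush_ept_tlb(void);                /* INVEPT wrapper               */
bool is_watched(uint64_t gpa);           /* hypothetical                 */
void run_monitor_event(uint64_t gpa);    /* user's script: may skip the
                                            write or restore the value   */

/* Arm a write watchpoint: clear the W bit on the guest-physical page
 * holding the watched variable; every write there now raises an
 * EPT-violation VM-exit (exit reason 48).                               */
static void monitor_write(uint64_t watched_gpa)
{
    *ept_pte_for(watched_gpa) &= ~EPT_WRITE;
    flush_ept_tlb();
}

/* EPT-violation handler: for the watched address, run the event; a
 * post-stage action can rewrite the value back, enforcing "this
 * variable never changes".                                              */
static void on_ept_violation(uint64_t fault_gpa)
{
    if (is_watched(fault_gpa)) {
        run_monitor_event(fault_gpa);
        return;
    }
    /* Unrelated write on the same page: restore W, single-step the
     * write with MTF, then re-clear W. Omitted for brevity.             */
}
```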
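(Similarly, looping back to the short-circuited syscall: a sketch of that event in C, abstracting away how the SYSCALL interception itself is wired up; HyperDbg's documentation describes trapping SYSCALL by disabling the EFER syscall-enable bit so the instruction faults into the hypervisor. The helper names are hypothetical.)

```c
#include <stdbool.h>
#include <stdint.h>

#define STATUS_ACCESS_DENIED 0xC0000022u  /* NTSTATUS handed back to the app */

typedef struct { uint64_t rax, rcx, r10; /* ... */ } guest_regs_t;

bool pre_stage_condition_met(const guest_regs_t *r);  /* user's condition */
void emulate_syscall(guest_regs_t *r);                /* normal emulation */
void advance_guest_rip(void);

/* Pre-stage SYSCALL event: the syscall number sits in RAX on both
 * Windows and Linux. If the condition matches, short-circuit the event:
 * the syscall is never emulated, we only fake a return status in RAX.  */
static void on_syscall_event(guest_regs_t *r)
{
    if (r->rax == 0x55 && pre_stage_condition_met(r)) {
        r->rax = STATUS_ACCESS_DENIED;   /* app believes the call executed */
        advance_guest_rip();             /* ...but nothing actually ran    */
        return;
    }
    emulate_syscall(r);                  /* unmatched: run it normally     */
}
```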
[15:31] So what makes these things possible? We extensively use MBEC, mode-based execution control, which is available on Intel processors with VT-x. We have a mechanism in which we allocate and use three different EPTPs, three extended-page-table pointers: one is a normal EPTP, another is a user-mode-execute-denied EPTP, and the last one is a kernel-mode-execute-denied EPTP. As for MBEC, it was introduced in Intel processors starting from Kaby Lake; I think that's the 7th generation. Whenever we reach our target process, we simply select either the user-denied or the kernel-denied EPTP in the VMCS.

[16:48] What we have here is that we combine this mode-change detection mechanism with another Intel feature called MOV-to-CR3 exiting. The thing is, whenever the CR3 register is changed, we know that a context switch is happening, and based on that, if we are interested in that process, we load one of these three EPTPs. So if the process is interesting to us, we load an EPTP that detects, for example, the execution of user-mode code.

[17:25] Using this approach, using MBEC, makes us capable of freezing the execution of applications. Assume that we are targeting a specific process, and for that specific process we load an EPTP that doesn't let it execute any user-mode code. What happens is that the operating system context-switches to this specific process, and we learn about it only by intercepting all of the VM-exits related to MOV-to-CR3 exiting. On the context switch, we load the EPTP that denies execution of user-mode code. So at some point Windows, or the operating system, tries to execute some user-mode code, and the processor raises EPT-violation VM-exits; in those EPT-violation VM-exits we basically just ignore the fault, so we don't let the application run. At some point the clock interrupt comes and switches to another process. So what happens here is that the operating system thinks that the user-mode code is running normally, but in reality we just prevent any execution of user-mode code. And this is the way we implement time-freeze debugging for user-mode applications in HyperDbg.
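(Putting the pieces of the freeze together in one hypothetical sketch: a CR3-load VM-exit selects the EPTP for the incoming process, and the execute-denied EPTP turns every user-mode fetch into an EPT violation that is deliberately left unresolved. Field encodings are from the Intel SDM; the globals are our stand-ins.)

```c
#include <intrin.h>   /* __vmx_vmwrite */
#include <stdint.h>

#define GUEST_CR3   0x00006802   /* VMCS field encodings (Intel SDM) */
#define EPT_POINTER 0x0000201A

/* Three pre-built EPT hierarchies (hypothetical globals). With MBEC,
 * bit 22 of the secondary processor-based controls, EPT entries gain a
 * separate execute permission for user-mode accesses, so "deny user
 * execution" is simply that bit cleared across the whole hierarchy.    */
extern uint64_t g_eptp_normal;
extern uint64_t g_eptp_user_exec_denied;
extern uint64_t g_eptp_kernel_exec_denied;
extern uint64_t g_target_cr3;    /* CR3 of the process to freeze */

void advance_guest_rip(void);

/* With MOV-to-CR3 exiting enabled, every context switch VM-exits here:
 * emulate the CR3 load, then pick the EPTP for the incoming process.   */
static void on_mov_to_cr3_exit(uint64_t new_cr3)
{
    __vmx_vmwrite(GUEST_CR3, new_cr3);
    __vmx_vmwrite(EPT_POINTER, (new_cr3 == g_target_cr3)
                                   ? g_eptp_user_exec_denied
                                   : g_eptp_normal);
    advance_guest_rip();
}

/* Under the user-exec-denied EPTP, any attempt to run user-mode code in
 * the target raises an EPT violation. Resuming without resolving it
 * just re-faults in place, so the thread makes no progress until the
 * timer interrupt schedules another process: the OS believes the code
 * is running, but the process is frozen in time.                       */
static void on_ept_violation_freeze(void) { /* intentionally empty */ }
```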
[19:03] So, for the conclusion: HyperDbg has two different mechanisms, for debugging both user mode and kernel mode, so you can just debug an entire operating system with it. It also leverages modern hardware features that give it system-wide visibility. It provides powerful features that are designed for reverse engineering, and these features are simply not possible with regular, traditional debuggers; you need a hypervisor to have those reverse engineering techniques. And of course HyperDbg is a free and open-source project, so it's available for the community to contribute to, and patches are always welcome. Yes, that's it.

[20:23] [Audience] This is all low-level instrumentation that you just showed. Do you have any higher-level utilities that provide a bit more, for example, OS awareness? I remember looking into LibVMI previously.

So you mean the difference between HyperDbg and LibVMI?

[Audience] Yes, maybe you can put it in context.

Yes. So the question is what higher-level features we offer for seeing higher-level operating system context. We do have a specific script engine that is written for HyperDbg, and this script engine is aware of operating system concepts like thread IDs and process IDs. It also has access to call certain functions from the Windows kernel, so it has complete interaction with the operating system. That's the higher level of debugging.

[21:51] Yeah, I would like to add that you can also use the Microsoft symbol server, or your own private symbols, to give context to the code you're looking at.

[Audience] But currently only Windows?

Yes. So we're working on a UEFI-based agent. That basically means our hypervisor will run at boot time, and that way we can support Linux and BSD and all the other operating systems. But it's a work in progress, so yeah, if you'd like to help us out, please do.

[22:27] And maybe it's also good to add that the way we designed this script engine is completely different from WinDbg, and we published an academic paper about it. If you look at that paper, the engine is more than 1,000 times faster than WinDbg's because of its design.

[22:54] Another question? Yeah.

[22:58] [Audience] To expand on that: when you say this is a work in progress, are you working to have OS-specific support for each of the operating systems you mentioned? [partially inaudible]

So the question is whether we are going to implement and add support for all of the operating systems. Am I right? [Audience clarifies, inaudible.]
[23:42] So yes, actually, the thing is there are things in HyperDbg that are not platform-related, and there are things that are platform-related. For example, right now we are using Intel VT-x, so we have to execute the instructions that are related to VT-x or VT-d. But we have also used some mechanisms that are only available on Windows, let's say IRQLs. The thing is, all of them can be adapted to other operating systems as well. For IRQLs, for example, there is nothing exactly like that in Linux, but we are trying to implement those specific features for other operating systems too. Yes?

[24:37] [Audience, partially inaudible] Do you have, for instance, something where I can get that kind of OS-level information on Linux, like information about what the system calls for Linux are?

So the question is whether we have that kind of operating-system information for Linux as well. Right now, I mean, what we did for intercepting system calls can be applied to Linux too. All of the events in HyperDbg use hypervisor capabilities, so there is nothing that ties them to Windows. But the user should also know how the system calls are implemented in Linux. For example, Windows uses a specific calling convention, let's say fastcall; on Linux, the user should know about the difference between the calling conventions in Linux and Windows to trace those system calls. But no, right now HyperDbg only understands Windows.

[25:49] [Audience] You mentioned you implement this time-freeze during the virtualization process. How stealthy is that? Because as far as I understand, you're not adversarial to kernel space, right? So if kernel-space malware would monitor syscalls, would it be able to catch it?

So the question is how stealthy the time-freeze debugging is inside Windows, because we are only time-freezing user-mode applications, and you want to know whether kernel malware could bypass this or not. The thing is, in HyperDbg there are two modes of debugging. Either you debug the kernel, the entire kernel, and pause everything, pause the operating system, so nothing changes in the operating system; or you use user-mode debugging, which is a separate mechanism. And if there is malware that can get into the kernel, there are limitations for that malware. For example, in Windows there is Driver Signature Enforcement,
so they have to sign their driver. And there are also other ways: for example, we can hook the functions that are designed to load certain drivers, so we can just catch them from there.

[27:25] [Audience] I mean, I think there's a convenient way to figure out if you're running in a VM, right? Like CPUID, or the second-layer address translation; you can't really hide all of those, right?

Yeah, so the question is whether we can hide all of those footprints of the hypervisor or not. The thing is, our first talk this morning, in the security track, was exactly about this. We cannot guarantee 100% transparency, but we raise the bar, a lot. HyperDbg, even just by its nature, is more stealthy, because it simply doesn't use the Windows APIs that are designed for debugging; but at the same time, we also try to mitigate the footprints of HyperDbg itself. Let me quickly pull up a slide from this morning that answers your question exactly, one second. So, this is the roadmap that we have for mitigating all of the artifacts that hypervisors themselves, but also HyperDbg on top of that, could reveal to malware. The two on the left we have implemented; the ones in the middle we're working on, those are about 75% done; and the remaining ones are scheduled, as you can see there.

[29:08] Do we have time for more questions? Sorry, can you speak up? [Audience question inaudible.] Yes, well, we currently don't have plans for that, but it is certainly possible to implement the same on Arm and on AMD, yes. Yeah, go ahead.

[29:39] [Audience question about SMM, partially inaudible.] Regarding SMM: you know, we have some commands, extension commands, for SMIs, and we can monitor certain things about the SMI handlers. For example, if an attacker, or any other application, or Windows itself, tries to trigger some SMIs, say by using an I/O port, you can use the features in HyperDbg to detect that. We also have some commands for triggering SMIs; you can just trigger SMIs using HyperDbg. If you check the documentation, there are certain commands for SMI handlers. But in general, everything runs inside VT-x, inside the ring -1 hypervisor, not in SMM.

[30:38] [Audience] So can I, for example, write a hook that would catch something that is actually inside SMI code?

No, no; I mean, this is not possible from the hypervisor. You would have to have access to the UEFI framework that loads the SMI handler, and your code should be in SMM, so this is technically not possible. This is actually part of our effort to port HyperDbg to UEFI. We are not sure yet, but it seems feasible: since we will be running from UEFI, we might be able to actually intercept things that are happening in SMM. But again, running in UEFI, you need some access to the SPI chip, because the SMI handler is loaded before the UEFI application.
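(Detecting software-triggered SMIs from a hypervisor comes down to intercepting I/O: on PC platforms a software SMI is conventionally raised by an OUT to port 0xB2, which VT-x can trap via its I/O bitmaps. A hypothetical sketch of the idea, not HyperDbg's actual command implementation:)

```c
#include <stdint.h>

#define APM_CONTROL_PORT 0xB2   /* conventional software-SMI trigger on PCs */

void report_smi_attempt(uint8_t value);                /* user's event/script */
void emulate_port_write(uint16_t port, uint8_t value); /* hypothetical        */
void advance_guest_rip(void);

/* EXIT_REASON_IO_INSTRUCTION (30): with the "use I/O bitmaps" control
 * set and port 0xB2 marked, every OUT to that port VM-exits here, so a
 * software-SMI trigger is visible to the debugger before it happens.   */
static void on_out_exit(uint16_t port, uint8_t value)
{
    if (port == APM_CONTROL_PORT)
        report_smi_attempt(value);    /* flag the SMI trigger            */

    emulate_port_write(port, value);  /* then perform the access         */
    advance_guest_rip();
}
```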
[31:37] [Audience question about QEMU, partially inaudible.] I mean, what we could do with QEMU is use some of their code for emulating certain instructions. But here, what we do is run on bare metal, so we don't have plans for that.

[32:03] Yeah, I mean, right now HyperDbg is supported in VMware Workstation's nested-virtualization environment, and in KVM's nested-virtualization environment. It doesn't support Hyper-V; we tried a lot, but unfortunately we couldn't port it to Hyper-V yet. But right now, I think KVM is supported.

[32:42] [Audience question about overhead, partially inaudible.] Yeah, so, the overhead compared to normal hypervisors. I mean, the way we emulate instructions is essentially the same as the way they emulate them. I think we don't have measurements for this, but it should generally have less overhead compared to big hypervisors like KVM, because our VM-exit handler has a much shorter path; the code we wrote for it is, of course, shorter than what they wrote for KVM. But in any case, if you are running HyperDbg in a nested-virtualization environment, let's say on VMware, then the overhead of HyperDbg is added on top of the overhead of VMware Workstation's nested virtualization, because Intel doesn't officially support nested virtualization at all; they just emulate everything, so there is no hardware support for nested virtualization.