WEBVTT 00:00.000 --> 00:12.400 So, all right, hello, and thank you for joining us at our Panoptic presentation. 00:12.400 --> 00:16.360 So, as a quick introduction, let us introduce ourselves. 00:16.360 --> 00:23.240 So, I'm Félix Alié, research engineer at CERES, which is a small digital methods lab at 00:23.240 --> 00:24.720 Sorbonne University. 00:24.720 --> 00:32.240 This is David Godicke, who is also a computer scientist, and Edouard, who is a research 00:32.240 --> 00:34.960 engineer, but more in the humanities. 00:34.960 --> 00:40.240 So, today we're going to show you the software we've been building for 00:40.240 --> 00:47.240 a couple of years, and this talk will be in three parts, I mean, there are three of us 00:47.240 --> 00:49.640 and we will each be speaking. 00:49.640 --> 00:54.240 So, I'm going to first talk a bit about the context, and what is it all about? 00:54.240 --> 00:59.880 Then I'm going to do maybe a quick demo, even if that's maybe a bit foolish to do 00:59.880 --> 01:02.680 live, but we'll try. 01:02.680 --> 01:09.240 Edouard is going to show some research cases, and then David is going to talk a bit more about 01:09.240 --> 01:13.080 the architecture and our plugin system. 01:14.040 --> 01:19.200 First of all, I'd like to ask, if you're here, maybe do you work with images in your 01:19.200 --> 01:28.680 work? Yes, some of you, okay, okay, you might be using maybe, we'll see. 01:28.680 --> 01:36.920 So, yeah, what's Panoptic? So this is just a quick sneak peek, like a screenshot of what 01:36.920 --> 01:42.600 the software looks like, but I'm going to dive into that a bit later. 01:42.600 --> 01:48.360 First, I want to give a bit of context. So Panoptic is a software for the 01:48.360 --> 01:52.640 exploration of medium to large datasets of images. 01:52.640 --> 02:00.040 We say medium to large, like, from a couple of thousand images to several hundred thousand 02:00.040 --> 02:01.040 images. 02:01.040 --> 02:07.160 I mean, the maximum size we've been working at, I think, was like 500,000 images. 02:07.160 --> 02:09.480 It was working pretty well. 02:09.520 --> 02:14.640 We've never tried a million, but maybe we'll find some day a use case that would work 02:14.640 --> 02:15.880 with a million. 02:15.880 --> 02:16.880 We'll see. 02:16.880 --> 02:25.320 So, we've been building this since May 2023, so soon two years. 02:25.320 --> 02:30.920 I started alone, working on a prototype, and then was quickly joined by David, who is now 02:30.920 --> 02:37.120 the main developer, and by Edouard, who is our main crash tester. 02:37.120 --> 02:44.240 And then, some time along the way, we also worked with a research designer to try 02:44.240 --> 02:50.160 to really understand the needs of the researchers and not be only computer scientists in 02:50.160 --> 02:54.560 a corner just building stuff that we found funny, but really try to understand 02:54.560 --> 02:57.560 what our software could be used for. 02:57.560 --> 02:59.440 And, of course, everything is open source. 02:59.440 --> 03:05.440 We wouldn't be here otherwise, and everything is on GitHub. 03:05.440 --> 03:11.480 So, one final piece of context, I'm going to talk about the origin of the project, 03:11.480 --> 03:14.680 what motivated us to build this.
03:14.680 --> 03:20.480 So, we had this researcher, she's called Virginie Julliard, whom you may know or not, who 03:20.480 --> 03:27.320 was working on a large dataset of Twitter images; she gathered data for, I think, almost 03:27.320 --> 03:34.880 10 years on different political controversies, and was trying to understand how far-right 03:34.920 --> 03:41.560 movements were using images to communicate and share their ideas. 03:41.560 --> 03:51.800 So, they collected data, a lot of data, actually 50,000 unique images at the end. 03:51.800 --> 03:57.000 And they faced a problem: they had no tool to really analyze them, because it 03:57.000 --> 04:01.000 was quite a lot to just look at manually. 04:01.040 --> 04:06.920 And also, their goal was to try to identify redundant images, which was quite hard to do, 04:06.920 --> 04:12.400 especially when images would have small variations, like cropping, added text, and things 04:12.400 --> 04:13.400 like that. 04:13.400 --> 04:17.840 So, yeah, they had Tropy, which is a great tool that you may already know, but it 04:17.840 --> 04:22.640 really works best with already curated datasets. 04:22.640 --> 04:29.280 And it lacks automatic, let's say, annotation tools or something like that, to help 04:29.320 --> 04:34.440 the researchers dive into the datasets, which can be really exhausting when you 04:34.440 --> 04:36.880 have to look at everything by hand. 04:36.880 --> 04:41.720 So, we had this need, and also we wanted to iterate, because at first, 04:41.720 --> 04:47.800 Virginie would ask me a lot to write some Python scripts to help use Python models and to do some 04:47.800 --> 04:54.120 computer vision on the dataset, but it was really hard to do through discussions alone. 04:54.120 --> 04:58.680 She would have new ideas after my work, then ask new questions, and I would have to do new 04:58.680 --> 05:01.640 stuff, and it was kind of long and exhausting. 05:01.640 --> 05:09.400 So, we thought, why not create our own software to work with images and to implement 05:09.400 --> 05:13.560 machine learning inside it? 05:13.560 --> 05:18.240 So now, yeah, it's the part where I'm going to try to do a quick demo for you. 05:18.240 --> 05:23.680 It worked well with Olivier, so fingers crossed it's going to work well with me too. 05:23.680 --> 05:28.240 So, yeah, yeah, everyone can see. 05:28.240 --> 05:33.040 You have here the interface of Panoptic, with, like, a really small dataset that we've 05:33.040 --> 05:40.680 imported. To give a bit of context, these are images taken from Twitter, like, six hundred 05:40.680 --> 05:46.920 images, about the theme of the "Grand Remplacement", which is another field of study here in 05:46.920 --> 05:51.360 our lab, studying the images of the far right. 05:51.360 --> 05:56.880 So you can see the images, and you can also see on the left here a column with what's called 05:56.880 --> 05:58.560 the properties. 05:58.560 --> 06:03.360 Properties are additional data that we can show in the interface, and that come along 06:03.360 --> 06:04.360 with the images. 06:04.360 --> 06:09.880 They're really important when you want to study the images inside their context of 06:09.880 --> 06:10.880 publication. 06:10.880 --> 06:15.640 By the way, if you want to look a bit more at the properties, we have, like, a table 06:15.640 --> 06:25.040 view where you can focus a bit more, for instance, on the text of the images.
06:25.040 --> 06:32.480 Now what's nice with these properties is that I can manipulate my data with them. 06:32.480 --> 06:40.720 So for instance, if I want to, I can create groups, where I can create groups of images along 06:40.720 --> 06:44.960 time, and I can choose the granularity of these groups. 06:44.960 --> 06:50.960 For instance, if I want all the images grouped by month. In the same way, 06:50.960 --> 06:56.240 I can make filters, or I can sort, and choose a lot of different properties to 06:56.240 --> 07:03.520 manipulate my images, and really create subsets of my big dataset to try to focus on certain 07:03.520 --> 07:05.520 points. 07:05.520 --> 07:10.920 I can also, of course, create properties directly inside Panoptic, so let's 07:10.920 --> 07:19.040 try and make, for instance, something called "category", which will be a multi-tag. 07:19.040 --> 07:24.840 So now you can see that I have an empty field below my images that I can modify 07:24.840 --> 07:25.840 directly. 07:25.840 --> 07:30.640 So for instance, we can see here a French politician called Darmanin. 07:30.640 --> 07:35.800 This is another French politician, who's called Maréchal Le Pen, so I can do my small 07:35.800 --> 07:37.200 annotation. 07:37.200 --> 07:41.840 Now, I don't want to do this by hand for the whole dataset, because that would take a 07:41.840 --> 07:43.680 lot of time. 07:43.680 --> 07:51.280 So, introducing what really makes Panoptic interesting, I think: the use of machine learning 07:51.280 --> 07:52.280 algorithms. 07:52.280 --> 07:58.240 You see, when you import images inside Panoptic, we use a deep learning model called 07:58.240 --> 08:03.720 CLIP to compute embeddings of these images, and then we are able to use machine learning 08:03.720 --> 08:10.200 algorithms, for instance k-means, which is a way of automatically creating groups 08:10.200 --> 08:13.160 of your images based on their similarity. 08:13.160 --> 08:18.600 So I can click on the Create Clusters button, and you see, for instance, I will have all the 08:18.600 --> 08:24.760 images here of one specific street, I don't know where in Paris, here only black backgrounds, 08:24.760 --> 08:30.060 which is a bit useless, here more TV screens, here, for instance, more pictures of the 08:30.060 --> 08:31.060 street. 08:31.060 --> 08:35.320 You see, there is a lot of variety because it's a generalist model, and you can have a lot 08:35.320 --> 08:41.480 of stuff in here, and I can interactively say, okay, this cluster has a lot of things in it. 08:41.480 --> 08:43.960 Then you create more clusters inside. 08:43.960 --> 08:48.080 So I will do it again, and see some sub-clusters. 08:48.080 --> 08:51.840 For instance, I will find another group here showing cops. 08:51.840 --> 08:55.840 So it could be interesting for me to annotate this. 08:55.840 --> 09:04.400 So I can do batch annotation, and tag the whole group: I can take my property here, and 09:04.400 --> 09:10.760 create a "cops" tag, and see, all the images are now tagged with the "cops" tag. 09:10.760 --> 09:18.240 Now, another way to use our tools, for instance, would be to group again by the category, 09:18.240 --> 09:23.880 see our cops groups, and ask ourselves the question: do I have more cops in the whole 09:23.880 --> 09:25.480 dataset? 09:25.480 --> 09:29.600 And I can ask Panoptic to do some image suggestions.
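For reference, here is a minimal sketch of the technique described above: computing CLIP embeddings for a set of images and clustering them with k-means. This is not Panoptic's actual code; the file names are placeholders, and it assumes the openai-clip, torch, pillow, and scikit-learn packages are installed.

    # Sketch only: CLIP embeddings + k-means, as described in the talk.
    import clip
    import torch
    from PIL import Image
    from sklearn.cluster import KMeans

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    paths = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]  # placeholder files
    batch = torch.stack([preprocess(Image.open(p)) for p in paths]).to(device)
    with torch.no_grad():
        vectors = model.encode_image(batch).cpu().numpy()

    # k-means groups images by embedding similarity, like the Create Clusters button.
    labels = KMeans(n_clusters=2, n_init="auto").fit_predict(vectors)
    for path, label in zip(paths, labels):
        print(label, path)

Re-running the clustering on one resulting group is what the speaker calls sub-clustering: the same step applied to a subset of the vectors.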
09:29.600 --> 09:34.040 And for instance, here, I will have on the top side of my screen some propositions, which 09:34.040 --> 09:39.760 I can zoom in on, by the way, and I can accept them and put them automatically into my groups 09:39.760 --> 09:41.640 when they fit my needs. 09:41.640 --> 09:47.640 And I will add more and more cops to try to reach a coherent annotation of all the 09:47.640 --> 09:50.240 cops that I could find in my dataset. 09:50.240 --> 09:56.800 So you could do that with a lot of things and interact a bit with your corpus. 09:56.800 --> 10:03.840 One last thing that I want to show you, the last similarity tool, would be the single-image 10:03.840 --> 10:04.840 mode. 10:04.840 --> 10:10.680 If I click on an image, I can have all the properties shown, and also I can see all the 10:10.680 --> 10:16.080 other images in the dataset sorted by similarity, with their similarity score. 10:16.080 --> 10:17.960 Here it's a cosine similarity. 10:17.960 --> 10:29.280 And I can see, for instance, if I want to, I can add more Marion, for instance, and tag 10:29.280 --> 10:30.280 them. 10:30.280 --> 10:40.600 And, hop, I apply this, and I get in my groups more Marion, as you can see. 10:40.600 --> 10:44.600 And again, I could ask for image suggestions to find the missing Marion. 10:44.600 --> 10:49.080 So I can add them and find all the Marion Maréchal in my dataset. 10:49.080 --> 10:55.840 And finally, of course, I can export all the data that I annotated in Panoptic in CSV 10:55.840 --> 10:59.960 format, to be used in another tool. 10:59.960 --> 11:11.160 And now, I'm going to hand this over to Edouard, who's going to talk about research. 11:11.160 --> 11:19.240 OK, so I'm going to show you a few examples of research projects currently using Panoptic. 11:19.240 --> 11:26.240 They can be grouped into, sort of, three categories: exploring large web corpora, large 11:26.280 --> 11:29.520 digitized corpora, or film corpora. 11:29.520 --> 11:35.200 And it is, of course, possible to imagine applications far beyond these. 11:35.200 --> 11:43.920 So here, as cases, I'm personally involved in the exploration of large web corpora. 11:43.920 --> 11:50.240 At CERES, we work in part on the study of online controversies, on the construction of 11:50.320 --> 11:56.320 online public problems, and on conflictuality around cultural events, for example. 11:56.320 --> 12:02.320 So we collect material from social networks, whether it's Twitter, Instagram, or TikTok, 12:02.320 --> 12:03.320 for example. 12:03.320 --> 12:14.480 We work with publications, and in these cases, images can be a relevant entry point for exploring 12:14.480 --> 12:21.520 the corpora, typically when we're looking at the visual dimension of the objects we study. 12:21.520 --> 12:25.200 So you can see some examples here. 12:25.200 --> 12:30.240 So first of all, the challenge is to be able to work with images associated with textual 12:30.240 --> 12:36.560 data, to work with the publications, but above all, as Félix said, the challenge is to be 12:36.640 --> 12:46.080 able to explore and annotate large masses of images in a reasonable amount of time. 12:46.080 --> 12:51.120 That's where image grouping based on similarity gets interesting, 12:51.120 --> 12:56.400 as do the batch annotation functionalities, so that we can understand what is in the corpus, 12:56.400 --> 12:58.640 and objectify it.
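The similarity sorting and the CSV export shown in the demo can be approximated as follows; this is a sketch, not Panoptic's code, with placeholder vectors standing in for the CLIP embeddings computed at import time.

    # Sketch only: sort a dataset by cosine similarity to one query image,
    # then export the results to CSV (file names and data are placeholders).
    import csv
    import numpy as np

    vectors = np.random.rand(4, 512)                  # stand-in for CLIP embeddings
    paths = [f"img_{i:03d}.jpg" for i in range(4)]    # placeholder file names

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    query = vectors[0]  # the image clicked in single-image mode
    ranked = sorted(zip(paths, vectors), key=lambda pv: cosine(query, pv[1]), reverse=True)

    with open("annotations.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "similarity_to_query"])
        for path, vec in ranked:
            writer.writerow([path, round(cosine(query, vec), 4)])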
12:58.640 --> 13:06.320 And the tools need to be fairly modular on the question of similarity, because we 13:06.320 --> 13:11.200 don't always want to put images together for the same reasons. 13:11.200 --> 13:14.880 For example, on the left, it was my thesis work. 13:14.880 --> 13:24.240 I was interested in the yellow vest movement, and I was looking to find the 13:24.240 --> 13:28.880 circulation of groups of images sharing the exact same origin. 13:28.880 --> 13:35.360 For example, different crops and shapes of the same videos, or the same video or the same picture 13:35.440 --> 13:38.480 posted on another social network, for example. 13:38.480 --> 13:43.360 And on the right, this is a work on the spread of the racist 13:43.360 --> 13:47.840 ideology of the "Great Replacement", and this time, the problem is quite different, 13:47.840 --> 13:51.760 because the idea is rather to detect similar objects. 13:51.760 --> 13:57.680 So for example, political personalities, images of TV sets, or 13:57.760 --> 14:01.840 semiotic material of political communication, for example. 14:06.800 --> 14:10.240 And we also recently worked with the Bibliothèque nationale de 14:10.240 --> 14:17.040 France, the French national library, where the corpora 14:17.040 --> 14:23.760 are different, because we work with objects or images which are photographed, 14:23.840 --> 14:29.360 then digitized, and the problem is the same, to group 14:29.360 --> 14:35.520 images according to their similarities, but the difference is about the noise. 14:35.520 --> 14:42.400 For example, when you are in digital methods, the problem of the noise is in the construction of 14:42.400 --> 14:46.320 the corpus, but not inside the images. 14:46.320 --> 14:50.720 And it's different in digital humanities, where there is no noise in the 14:50.720 --> 14:56.800 construction of the corpus, but you can have noise in the images themselves, 14:56.800 --> 15:01.280 because, for example, you can have backgrounds, or no backgrounds, 15:01.280 --> 15:03.120 when you take the photograph. 15:03.120 --> 15:11.760 So this can affect the way the algorithms put the images together. 15:11.760 --> 15:17.600 And one last example, we have a doctoral student, Léonolfi, 15:17.600 --> 15:23.520 who works on identifying visual codes in costume films. 15:23.520 --> 15:27.120 She is, for example, interested in repeated shot compositions in 15:27.120 --> 15:28.160 her corpus. 15:28.160 --> 15:33.680 So what we do is cut the films into images, taking one every five seconds, 15:33.680 --> 15:36.960 and we import them into Panoptic. 15:38.240 --> 15:41.760 And once in Panoptic, we are able to find similar shots and 15:41.920 --> 15:46.240 the visual codes of the film genre, across all the films 15:46.240 --> 15:50.480 and the almost 300 screenshots taken. 15:52.320 --> 15:59.840 I think, and then I will let David speak, what we need to draw from this quick presentation of examples 15:59.840 --> 16:05.040 is that there is a wide variety of reasons for exploring images, 16:05.760 --> 16:10.240 which implies having a tool with a lot of modularity, 16:10.240 --> 16:16.160 and a use of similarity algorithms that will never be totally adapted to all the types of images 16:17.280 --> 16:20.080 and to all the questions we can ask them. 16:35.360 --> 16:43.360 So we don't have a lot of time left, so I will speak quickly about our architecture.
16:44.000 --> 16:48.960 So the storage is just an SQLite database, the backend is Python, 16:48.960 --> 16:56.000 and the UI is a web UI, and we see here that we have loaded our plugin with 16:56.000 --> 17:02.160 the vectors and the FAISS index, which can communicate directly with the backend, 17:02.160 --> 17:05.920 and also insert data into the database if needed. 17:07.920 --> 17:14.640 So for the SQLite database, the advantage is that it's very easy to install, 17:14.640 --> 17:18.720 and also all the data is in one file, which means that if you want to share 17:19.440 --> 17:24.160 your Panoptic project, you can simply copy the file and send it to another person, 17:24.160 --> 17:28.640 who can import it in the Panoptic app, and it will work out of the box. 17:29.600 --> 17:34.640 And for the backend, we chose Python because it's easy to develop plugins, 17:34.640 --> 17:39.120 and it's a scripting language, so it works on every operating system, 17:39.760 --> 17:47.120 and also we have all the Python capabilities, which makes it much easier to use machine learning algorithms. 17:48.320 --> 17:53.840 For the front end, the idea was that HTML and CSS are well known, 17:53.840 --> 17:58.560 and we don't need to install any UI toolkit, because everyone already has a browser, 17:59.280 --> 18:05.200 and also allowing remote work is one of our next goals, 18:05.200 --> 18:09.040 so a browser-based approach sets the foundations for it. 18:10.400 --> 18:14.960 And now the most important part is the plugins, which are able to customize 18:15.840 --> 18:22.240 functionalities of Panoptic, and we have three main actions, which are the clustering, 18:22.880 --> 18:26.960 the similarity, and also a more global execute function. 18:28.000 --> 18:34.080 And to give you an example of a customization we had to do in the lab: 18:34.080 --> 18:41.040 the clustering of memes. Memes usually reuse the same base image, 18:41.440 --> 18:45.200 so clustering on image similarity alone is not very useful. 18:45.600 --> 18:51.840 We first have to extract the text, and then a special function will cluster using 18:51.840 --> 19:00.080 the image, and also the meaning of the text. And to show you an example of how it looks in the UI: 19:00.720 --> 19:05.760 here we have the Create Clusters button, where we can choose our clustering function, 19:05.760 --> 19:13.200 and in this case it's our PanopticML compute clusters function, which can also have some parameters, 19:13.200 --> 19:17.680 for example, the vector type and the number of clusters we want. 19:17.760 --> 19:22.480 And you can see at the bottom we have the signature of the function we defined, 19:23.040 --> 19:28.800 and the parameters we give to the function will be shown in the UI, which is very convenient.
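To illustrate the meme use case and the signature-driven parameter widgets just described, a custom clustering function might combine normalized image and text embeddings before running k-means, and the backend can introspect its typed signature to build the settings form. Everything below (names, the weighting scheme) is an assumed sketch, not Panoptic's real plugin API.

    # Sketch only: cluster memes on the image and the meaning of the text together.
    import inspect
    import numpy as np
    from sklearn.cluster import KMeans

    def compute_clusters(image_vecs: np.ndarray, text_vecs: np.ndarray,
                         n_clusters: int = 10, text_weight: float = 0.5):
        # Normalize each modality, then concatenate with a weight, so identical
        # template images can still split apart when their captions differ.
        img = image_vecs / np.linalg.norm(image_vecs, axis=1, keepdims=True)
        txt = text_vecs / np.linalg.norm(text_vecs, axis=1, keepdims=True)
        combined = np.hstack([(1 - text_weight) * img, text_weight * txt])
        return KMeans(n_clusters=n_clusters, n_init="auto").fit_predict(combined)

    # A backend can read the signature to render each parameter as a UI field.
    for name, param in inspect.signature(compute_clusters).parameters.items():
        print(name, param.annotation, "default:", param.default)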
19:30.320 --> 19:36.240 At the same time we also have the similarity view. So for example here we have two images that 19:36.240 --> 19:43.200 use a different similarity function: the top image uses the colors to find similarities, 19:43.280 --> 19:49.600 so we find the blue butterflies, and the bottom image uses grayscale images, 19:49.600 --> 19:55.360 and we see that the results are different in the UI. And if you have special needs, you can adapt 19:55.360 --> 20:03.840 your function to show it differently in the Panoptic UI. And the last action is the execute action; 20:03.840 --> 20:11.920 it's a more global action that is basically a way to execute any script on a collection of images, 20:12.000 --> 20:17.280 and the idea behind this is that you don't have to go out of the UI to execute scripts, 20:18.160 --> 20:24.240 generate data and import it back into Panoptic; you can do everything from the UI, 20:24.800 --> 20:32.640 so this was the quick overview of the technical side of Panoptic, and if you have any questions, 20:32.640 --> 20:34.560 feel free to ask, and thank you for listening. 20:42.880 --> 20:46.880 Yeah, let me put it there. 20:46.880 --> 21:10.880 I mean, can you repeat the question? Okay, so, yeah, 21:10.960 --> 21:19.040 so if I understand correctly, your question is, 21:19.040 --> 21:28.720 can we trust the similarity score that's shown in the UI? Yeah, okay. So the similarity score, 21:29.600 --> 21:38.080 I would say it will depend on your dataset, because sometimes a similarity score of, like, 99% 21:38.560 --> 21:46.800 will be really, really similar images indeed, but it will also depend on the coherence 21:46.800 --> 21:54.080 of your dataset. If you have very variable images, then maybe the images that are really, 21:54.080 --> 21:59.440 really similar will have a really high similarity score, but if all the images in your dataset 21:59.520 --> 22:09.040 already look alike, then maybe 99 won't be that representative. So yeah, you need to adapt this 22:09.040 --> 22:15.920 to your dataset to interpret the similarity score. Other questions, maybe? Yeah. Following 22:17.120 --> 22:24.080 from the question that was just asked, are you aware of any studies using this tool 22:24.160 --> 22:30.400 on different kinds of datasets? That's very informative. We're interested in chemoinformatics, 22:30.400 --> 22:39.600 because the way you build this embedding system and human-assisted exploration of the 22:40.000 --> 22:53.680 datasets, and also the UI, is very interesting. Okay, so the question is, are we aware of 22:55.280 --> 23:04.880 other scientific, like biomedical or bioengineering research, biology or chemistry research, 23:05.840 --> 23:13.600 that is using Panoptic? The answer is, right now, no. We work mainly with digital humanities 23:14.800 --> 23:20.480 and media studies, but it would be really interesting to try to apply it to new fields.
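The color-versus-grayscale difference mentioned above comes down to what the similarity function embeds. A toy version of a grayscale similarity function (placeholder file names, not the plugin's real code) could look like this:

    # Sketch only: a similarity function that ignores color entirely.
    import numpy as np
    from PIL import Image

    def grayscale_vector(path, size=(32, 32)):
        # Convert to luminance and downsample, so only layout and brightness
        # count; a blue and an orange butterfly with the same shape look alike.
        img = Image.open(path).convert("L").resize(size)
        vec = np.asarray(img, dtype=np.float32).ravel()
        return vec / np.linalg.norm(vec)

    def similarity(path_a, path_b):
        # Cosine similarity between the two grayscale vectors.
        return float(grayscale_vector(path_a) @ grayscale_vector(path_b))

    print(similarity("butterfly_blue.jpg", "butterfly_orange.jpg"))  # placeholders

Swapping this in for a color-aware embedding is exactly the kind of modularity the similarity action is meant to allow.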
I think 23:20.480 --> 23:26.800 David had contact with someone working in a natural park, so it's not really biology, but they 23:26.800 --> 23:33.680 were trying to study photo traps, pictures of animals at night and such, and trying to categorize 23:33.680 --> 23:40.480 them, and we're trying right now to use that dataset, but we don't have access to it yet, 23:41.040 --> 23:47.520 but I don't see why it wouldn't work, actually, especially since you can eventually 23:48.080 --> 23:54.560 use your own model, if the current one is not specialized enough, so yeah. 23:59.200 --> 24:06.960 Yes, I was wondering, is Panoptic taking metadata into account? Like, let's say 24:06.960 --> 24:13.760 it's not possible that a Tesla is on an image that is from 1991, such a thing. 24:13.760 --> 24:21.760 So the question is, how strong is Panoptic at taking metadata into account? For instance, 24:21.760 --> 24:30.960 could we have a filter like: it's not possible to have a Tesla in an image posted in 1991? 24:30.960 --> 24:36.640 That's correct, right, that's your question? I mean, the filters, you define them yourself, 24:36.720 --> 24:44.880 so you can, but the filters are not used to compute the similarity. 24:44.880 --> 24:51.360 I mean, if you want to filter all the images in your dataset that are later than 1991, you can 24:51.360 --> 24:58.480 do it if you have the metadata, and if you want to cluster them or do some analysis inside them, 24:58.480 --> 25:05.840 you can, but you need to do it manually. I mean, it won't automatically detect fake data, 25:05.840 --> 25:12.560 like, it's an image from 1991 and you have a Tesla on it, so it's a fake image. 25:12.560 --> 25:17.760 That's something that you need to figure out yourself by using the tool, maybe by finding 25:17.760 --> 25:24.240 all the Teslas in your dataset, and then grouping by dates and seeing if you have Tesla images 25:25.680 --> 25:30.240 that are in 1991. That you can do, yes, but it's not fully automatic. 25:30.880 --> 25:36.880 But you could also make a plugin that could do that. Yeah, yeah. 25:36.880 --> 25:40.960 Is it possible to tweak the plugin UI logic? Like, for example, 25:40.960 --> 25:46.240 suppose I extract something from the image, text like in the previous example, or objects, 25:46.240 --> 25:50.640 but can I visualize it or tag it, to annotate what has been extracted? 25:51.840 --> 25:59.920 So the question is, how easy would it be to visualize data that I would extract myself in a plugin? 26:01.200 --> 26:10.320 That's right. Yeah. So yeah, especially for OCR text, it's actually quite easy, because in your plugin, 26:10.880 --> 26:17.280 you can create a new property, which would be called "OCRized text", and then you would have access 26:17.280 --> 26:26.000 to this property in the global UI. So you could see your text as a property, and you could work with it in the UI. 26:26.080 --> 26:33.680 So it wouldn't be an overlay, we don't have layers, 26:34.880 --> 26:39.040 but we are really thinking about that, especially for object extraction, 26:39.440 --> 26:45.040 which could be quite useful for identifying some parts of the images, but we don't have that right now, 26:45.040 --> 26:50.320 but we are really thinking about it, and we want to do it. I think there was a, do we have time? 26:50.320 --> 26:55.680 Yeah. Yeah. I've been using Panoptic, actually, on a database of data. Really?
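The OCR-as-a-property pattern described in this answer would look roughly like the following in a plugin. The project and property calls are hypothetical stand-ins, not Panoptic's documented plugin API, and pytesseract is just one possible OCR backend.

    # Sketch only: a plugin action that stores OCR output as a new property.
    # `project.create_property` and `project.set_value` are assumed names;
    # pytesseract is an assumed dependency (pip install pytesseract pillow).
    import pytesseract
    from PIL import Image

    def ocr_action(project, images):
        # The new property then shows up in the global UI like any other.
        prop = project.create_property("OCRized text", kind="text")
        for image in images:
            text = pytesseract.image_to_string(Image.open(image.path))
            project.set_value(prop, image, text.strip())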
26:55.680 --> 27:00.880 20,000 images, and it's cool, it's really great interacting with it, but I had a deployment issue, 27:00.880 --> 27:06.160 because I depend on a GPU to be able to compute it, so I deployed it on a server with a GPU. 27:06.160 --> 27:12.560 And I would like to propose it to users, and I don't want to give them access to my server with the GPU. 27:13.520 --> 27:19.920 So how easy is it to take this, freeze it, and have only the already computed properties, for instance, 27:19.920 --> 27:24.560 and put this on a smaller server where only the corpus would be consulted, 27:24.560 --> 27:30.480 so you don't need as much computing power, because otherwise this scenario is going to cost a lot. 27:30.480 --> 27:39.680 So the question is, is it possible to share a project easily from a server where everything was previously computed, 27:39.840 --> 27:43.600 and then to share it with a new user who doesn't have a GPU? 27:43.600 --> 27:51.840 First, I want to say, this is a great question, and Panoptic really works on PCs that don't have a GPU, actually. 27:51.840 --> 27:59.040 So, a bit of context. But then it would work for you. I mean, you can embed everything in the 27:59.040 --> 28:04.720 single SQLite file, even the images. I mean, you have an option where you can say, please store 28:04.800 --> 28:11.600 the images directly in the SQLite file, so it's really easy to share afterwards. So if you have 28:11.600 --> 28:17.280 another Panoptic, maybe on another server or on another computer, you can just take the SQLite file, 28:17.280 --> 28:24.400 the panoptic.db, and create a project and import it in your new Panoptic instance. 28:25.040 --> 28:31.600 So yeah, maybe you will have some bugs, and you can write to us, but it's supposed to work 28:31.600 --> 28:36.480 out of the box. Yeah. Let's take a picture.
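A minimal sketch of the single-file idea discussed in this answer: annotations and even the image bytes can live in one SQLite file, so moving a project is just copying that file. The table layout below is invented for illustration and is not Panoptic's actual schema.

    # Sketch only: store image bytes next to annotations in one SQLite file.
    import sqlite3

    con = sqlite3.connect("panoptic.db")  # one file == the whole project
    con.execute("CREATE TABLE IF NOT EXISTS images (path TEXT PRIMARY KEY, data BLOB)")
    with open("img_001.jpg", "rb") as f:  # placeholder image
        con.execute("INSERT OR REPLACE INTO images VALUES (?, ?)",
                    ("img_001.jpg", f.read()))
    con.commit()
    con.close()
    # Sharing the project is now just copying panoptic.db to another machine,
    # where the precomputed vectors can be read without any GPU.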