\chapter{Literature Review}

\section{Introduction to the Literature Review}
% Write 1–2 paragraphs that:
% - State the purpose of this chapter (to situate your work in existing scholarship).
% - Explain how you’ve structured the review (chronological, thematic, methodological).
% - Briefly preview the kinds of sources you will cover.

The purpose of this chapter is to situate this work within the context of existing scholarship. The chapter is organized chronologically and follows the development of the project itself. Because the synthesis presented here is new, many of the sources discussed are my own contributions, accompanied by a first-hand account that contextualizes and motivates these developments.

\section{Historical Background and Evolution of the Field}
% Write 2–3 paragraphs that:
% - Trace the development of the field/problem area from its origins to the present.
% - Identify major milestones, shifts in thinking, or influential publications.
% - Provide enough history for context but keep the focus on ideas relevant to your problem.

I began as a complete amateur, initially motivated by a desire to create mathematical textbook illustrations. Over time I developed a particular interest in \(3\)D illustrations, but quickly grew dissatisfied with the state of the available tools and resolved to build my own. My first attempt tessellated a surface into quadrilaterals, but I soon realized that the occlusion was incorrect. At the time I was taking my first course in linear algebra, and with the help of ChatGPT \cite{skillmon2025answer3dpointsort} I recognized that the dot product could be used to approximate a depth ordering by comparing midpoints. I later refined this idea through a Mathematics Stack Exchange inquiry \cite{5062592}. Intuitively, I also realized that tessellating into triangles was more coherent than using quadrilaterals, whose vertices are often non-coplanar.
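To make this early heuristic concrete, here is a minimal sketch in Python (the names are illustrative only; the package itself is not written this way): depth is approximated by the dot product of each triangle's midpoint with the view direction, and triangles are drawn back to front.

```python
# Illustrative sketch of the midpoint depth heuristic (not the
# package's actual API): approximate a back-to-front drawing order
# by comparing triangle midpoints along the view direction.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def midpoint(triangle):
    # Component-wise average of the three vertices.
    return tuple(sum(c) / 3.0 for c in zip(*triangle))

def depth_order(triangles, view_dir):
    # A larger dot product with view_dir means farther along the
    # viewing axis; draw far triangles first (painter's algorithm).
    return sorted(triangles,
                  key=lambda t: dot(midpoint(t), view_dir),
                  reverse=True)
```

As the next section recounts, this heuristic is only an approximation: midpoint depth can disagree with true occlusion whenever projected triangles overlap.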
Initially I attempted to sort the triangles by their midpoints, but the results still exhibited occlusion errors. This led me to another inquiry on the occlusion ordering of non-intersecting, non-cyclically overlapping triangles \cite{5063772}, where a pivotal suggestion was made in the comments: sort only those simplices whose orthogonal projections onto the viewing plane overlap. This insight proved to be the breakthrough, and from this seed grew the present occlusion algorithm. In the final system, simplices are first projected onto the viewing plane; when an overlap is detected, the order is resolved by mapping the overlap back onto both simplices through the inverse of the orthogonal projection. Although it took months of refinement, this ultimately yielded a coherent and general solution.

Two further difficulties arise naturally: intersecting simplices and cyclically overlapping simplices. The algorithm addresses the first by eliminating intersections entirely through partitioning, which together with the occlusion algorithm forms the foundation of the system. The resolution of cyclic overlap is left for future work.

\section{Existing Approaches in Practice and Academia}
% Write several subsections if needed.
% - Summarize the main theories, methods, frameworks, or technologies used.
% - Organize by theme, not by individual paper, to avoid a list-like review.
% - Show both academic and applied/practical approaches (tools, systems, case studies).

Until now, no comprehensive solution to the problem of clipping and occlusion ordering of affine simplices has existed. Existing practices in \(3\)D mathematical illustration and computer graphics address only fragments of the problem. For example, many software systems triangulate surfaces and order the resulting triangles using primitive heuristics, but they do not clip intersecting simplices, and their occlusion methods are often approximate or incorrect.
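For contrast with such heuristics, the projection-first strategy described in the previous section can be sketched as follows. This is illustrative Python under assumed conventions, not the package's implementation; in particular, a conservative bounding-box test stands in here for the exact overlap test between projected simplices.

```python
# Sketch of the project-then-compare strategy (illustrative only):
# project vertices orthogonally onto the viewing plane, and test the
# projections for overlap before any depth comparison is attempted.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project(p, n):
    # Orthogonal projection of p onto the viewing plane through the
    # origin with unit normal n; also return the depth along n.
    d = dot(p, n)
    return tuple(pi - d * ni for pi, ni in zip(p, n)), d

def bbox(points):
    return (tuple(min(p[i] for p in points) for i in range(3)),
            tuple(max(p[i] for p in points) for i in range(3)))

def projections_overlap(simplex_a, simplex_b, n):
    # Conservative stand-in for the exact test: do the bounding boxes
    # of the projected vertex sets intersect on the viewing plane?
    pa = [project(p, n)[0] for p in simplex_a]
    pb = [project(p, n)[0] for p in simplex_b]
    (lo_a, hi_a), (lo_b, hi_b) = bbox(pa), bbox(pb)
    return all(lo_a[i] <= hi_b[i] and lo_b[i] <= hi_a[i]
               for i in range(3))
```

Only pairs whose projections overlap need to be ordered at all; non-overlapping pairs may be drawn in any relative order.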
Much of the literature in both academia and practice focuses on partial approaches (tessellation, polygon clipping, or depth sorting) rather than on their coherent integration into a rigorous framework. This fragmentation explains why current tools, despite producing visually plausible illustrations, frequently introduce occlusion errors under precise mathematical scrutiny. The work presented here unifies these scattered threads into a single coherent system, addressing both intersection elimination and rigorous occlusion ordering. To my knowledge, this synthesis has not been achieved previously in the literature or in practice.

\section{Comparative Analysis of Approaches}
% Write 2–3 paragraphs that:
% - Compare the main approaches head-to-head.
% - Highlight similarities and differences in assumptions, scope, and outcomes.
% - Use a table or figure if comparisons are complex (e.g., feature-by-feature).

When comparing existing approaches to the method implemented in \luatikztdtools{}, a fundamental distinction emerges. Existing tools and frameworks for \(3\)D illustration, whether in academic research or practical software, typically handle only parts of the problem: some focus on tessellation, others on clipping, and still others on heuristic depth sorting. None, however, integrate these processes into a coherent framework capable of systematically handling intersections and providing a rigorous occlusion ordering of affine simplices. By contrast, the algorithm presented here addresses both challenges directly: it introduces a systematic clipping procedure to eliminate intersections and a transitive partial-order approach to occlusion sorting. No existing approach in either the academic literature or applied software achieves this combination.
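To illustrate what a transitive partial order buys in practice: once pairwise decisions of the form ``draw \(A\) before \(B\)'' are available, a topological sort produces a consistent global drawing order, and a cycle in the relation corresponds exactly to the cyclically overlapping case deferred to future work. The following is a minimal Python sketch (illustrative only, not the package's implementation); how the pairwise decisions are computed is the subject of the algorithm described in the text.

```python
# Linearizing an occlusion partial order with a topological sort
# (Kahn's algorithm). `draw_before` is a set of pairs (a, b) meaning
# "a must be drawn before b"; a cycle means no consistent order exists.
from collections import defaultdict, deque

def draw_order(simplices, draw_before):
    succ = defaultdict(list)
    indeg = {s: 0 for s in simplices}
    for a, b in draw_before:
        succ[a].append(b)
        indeg[b] += 1
    queue = deque(s for s in simplices if indeg[s] == 0)
    order = []
    while queue:
        s = queue.popleft()
        order.append(s)
        for t in succ[s]:
            indeg[t] -= 1
            if indeg[t] == 0:
                queue.append(t)
    if len(order) != len(simplices):
        raise ValueError("cyclic overlap: no consistent drawing order")
    return order
```

The error branch is precisely where cyclic overlap manifests: the relation stops being a partial order, and some simplex must be subdivided before a drawing order can exist.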
In short, while prior methods provide partial or approximate solutions, the present work provides the first complete and rigorous solution to the problem of visualizing \(3\)D parametric scenes composed of affine simplices.

\section{Strengths and Contributions of Current Work}
% Write 1–2 paragraphs that:
% - Acknowledge what existing research has accomplished successfully.
% - Highlight the most impactful results, innovations, or insights.
% - Show balance: this is not only a critique but also recognition of contributions.

The present work demonstrates the systematic illustration of \(3\)D scenes composed of affine simplices. This was achieved through the use of affine linear algebra to define simplices explicitly and to traverse their spans. Here the span of a simplex with base point \(p_0\) and edge vectors \(v_1, \dots, v_k\) is the set of points
\[
  p_0 + \sum_{i=1}^{k} t_i v_i ,
\]
where unrestricted coefficients \(t_i\) trace out the affine hull, and the constraints \(t_i \ge 0\) with \(\sum_i t_i \le 1\) restrict the combination to the simplex itself.

A further breakthrough came from reducing the problem to its simplest building blocks: the intersection and occlusion of the lowest-dimensional simplices. Higher-dimensional simplices, such as triangles, were then treated in terms of these elementary cases. This reductionist approach provided a clear and coherent foundation for solving the problem at scale.

\section{Limitations and Open Challenges in Current Methods}
% Write 2–3 paragraphs that:
% - Point out unresolved problems, weaknesses, or gaps in coverage.
% - Explain why those challenges persist (technical, conceptual, resource-related).
% - Make the case that further work is necessary (leading toward your contribution).

This work does not, in its current form, address cyclically overlapping simplices; that challenge is reserved for future work. The decision to defer it was deliberate, in order to secure authorship priority on the core contribution before extending the framework further.
Future versions will also introduce the ability to clip parametric objects (that is, collections of simplices belonging to the same object) by one another. This will enable the computation of intersection sets as new, customizable parametric objects, thereby broadening the applicability of the package.

\section{Emerging Trends and Future Directions}
% Write 2 paragraphs that:
% - Identify new developments (technologies, theoretical frameworks, pedagogical methods).
% - Indicate how these trends might shape the field in the near future.
% - If appropriate, note how these align with or diverge from your own approach.

Recent developments in computer graphics emphasize the pursuit of photorealism, with ray tracing at the forefront. These methods simulate lighting, reflection, and refraction with a high degree of physical accuracy. While powerful, they are computationally intensive and aimed at rendering dense, visually realistic scenes rather than sparse, mathematically rigorous ones. Nevertheless, the conceptual framework of ray tracing demonstrates the importance of light-based visibility models, which may offer inspiration for extending methods of occlusion ordering in the future.

In contrast, the approach taken in this work focuses on exact algebraic partitioning and sorting of affine simplices, prioritizing mathematical rigor over visual realism. Future research may explore a hybrid direction, drawing selectively from photorealistic methods while preserving the precision needed for mathematical illustration. Such a synthesis would represent a promising new trajectory: combining the clarity and correctness demanded by pedagogy with techniques inspired by the broader field of computer graphics.

\section{Proposed Approach and Its Advantages Over Existing Work}
% Write 2–3 paragraphs that:
% - Transition from reviewing others’ work to presenting your own.
% - Explicitly connect the gaps you identified with how your approach addresses them.
% - Highlight advantages without overstating novelty (frame as building on prior work).

The limitations of existing methods are clear: they either neglect intersections altogether, rely on approximate sampling, or obscure their logic within black-box implementations. As a result, mathematical illustrators have lacked a reliable way to render sparse parametric geometries in three dimensions without visible errors. This is the gap that motivates the present work.

The approach developed in \luatikztdtools{} addresses this gap by providing a systematic method for clipping intersecting simplices and by establishing a rigorous transitive partial order for occlusion sorting. Unlike prior tools, the method is fully transparent in its logic, grounded in affine linear algebra, and designed to ensure correctness even for the sparse configurations where high-sampling techniques fail.

The principal advantage of this work is its ability to produce illustrations that are both mathematically precise and computationally efficient. By reducing the problem to its most fundamental constituents, simplices, and resolving both intersections and occlusion ordering at that level, the method achieves a coherence that existing approaches lack. The framework not only serves the immediate needs of mathematical illustration but also establishes a foundation that can be extended to broader areas of computer graphics.

\section{Summary of the Literature Review}
% Write 1 paragraph that:
% - Summarizes the key themes and insights from the review.
% - Restates the limitations in current work and the opportunity for contribution.
% - Provides a smooth transition into the Methodology chapter (i.e., how you will address the gaps).

The review of existing tools and practices makes one thing clear: the available methods were inadequate for producing precise \(3\)D mathematical illustrations.
They either ignored intersections, introduced occlusion errors, or relied on approximate techniques such as high sampling. Bound by these limitations, I was compelled to develop my own approach. While this work does not yet address the final challenge of cyclic overlaps, it establishes a rigorous framework for clipping intersecting simplices and systematically ordering them by occlusion. The next chapter presents the methodology by which this is achieved.