1. Temporal: A nine-year journey to fix time in JavaScript
Summary of "Temporal: The 9-Year Journey to Fix Time in JavaScript"
The article, written by Jason Williams, discusses the development of the Temporal API in JavaScript, aimed at improving date and time handling. Initially, JavaScript's Date object was based on Java's model from 1995, which became problematic over the years as JavaScript evolved and was used in more complex applications.
Key points include:
- Problems with Date: The existing Date object had issues like mutability, inconsistent month arithmetic, and ambiguous parsing, leading developers to create workarounds and use external libraries like Moment.js to handle dates and times.
- The Need for Temporal: Recognizing the shortcomings of Date, a proposal for Temporal was initiated in 2017 to create a more robust, immutable, and timezone-aware date and time API.
- Development Process: The proposal went through various stages in the TC39 committee, gathering input from multiple organizations, including Bloomberg and Igalia, and eventually gained support from several key contributors in the JavaScript community.
- Features of Temporal: Temporal introduces several new types, such as Temporal.ZonedDateTime for managing time zones and Temporal.Instant for precise moments in time, along with support for multiple calendars and durations.
- Implementation Challenges: The implementation was complex, with a large specification and the need for efficient performance across different browsers. A collaborative approach led to the creation of a shared library, temporal_rs, to facilitate the implementation across engines.
- Current Status: As of March 2026, Temporal has reached Stage 4 in the TC39 process, meaning it's officially part of the ECMAScript standard. It is supported in several environments, including major browsers and Node.js.
- Future Work: There are still challenges ahead, such as ensuring compatibility with existing web APIs and improving integration with tools like date pickers.
Overall, Temporal represents a significant advancement in JavaScript's handling of dates and times, addressing long-standing issues and demonstrating successful collaboration within the programming community.
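The Date problems listed above are easy to reproduce in any JavaScript engine. Below is a minimal demonstration of the mutability and silent month-rollover behavior (this shows standard Date semantics, not code from the article):

```javascript
// Date is mutable: setMonth() modifies the object in place.
const due = new Date(2026, 0, 31); // months are 0-indexed: 0 = January 31
due.setMonth(due.getMonth() + 1);  // "add one month" to Jan 31...

// ...but February 31 does not exist, so Date silently rolls over to March 3.
console.log(due.getMonth()); // 2 (March), not 1 (February)
console.log(due.getDate());  // 3
```

By contrast, Temporal's types are immutable and return new values: adding a month to a January 31 date clamps to the end of February by default, rather than overflowing into March.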
2. Making WebAssembly a first-class language on the Web
This post expands on a presentation about WebAssembly (Wasm) given at the 2025 WebAssembly CG meeting. Since its launch in 2017, WebAssembly has significantly evolved, adding features like shared memories, SIMD, exception handling, and garbage collection, allowing more languages to use it effectively. However, it still struggles with wider adoption on the web because it is considered a "second-class" language compared to JavaScript, which is more integrated into the web platform.
Key issues include:
- Loading Code: Loading WebAssembly is more complex than loading JavaScript, requiring cumbersome API calls instead of simple script tags.
- Using Web APIs: WebAssembly relies on JavaScript to access web APIs, complicating the process and requiring additional "glue code" to bridge the two.
- Developer Experience: The overall developer experience for WebAssembly is perceived as inferior to JavaScript's, making it less appealing for the average developer.
- Documentation: Most web documentation is tailored for JavaScript, making it harder for non-JavaScript developers to understand web APIs.
- Performance Overhead: The glue code adds performance costs, and calling web APIs can be slower from WebAssembly than from JavaScript directly.
To address these challenges, the proposal for WebAssembly Components has emerged, which aims to provide a standardized, self-contained way to create and use WebAssembly modules that can directly interact with web APIs without JavaScript. This model could streamline development and improve the overall experience for developers.
In summary, while WebAssembly has made great strides since its inception, it needs further integration with the web platform to become more accessible and beneficial for all developers, not just those with extensive resources. The development of WebAssembly Components is a promising step towards achieving this goal.
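The "glue code" problem is visible even in the smallest possible case. The sketch below hand-assembles a module exporting an i32 `add` function; note that every step (decoding bytes, compiling, instantiating, reaching the export) goes through JavaScript, which is exactly the indirection the Components proposal aims to remove. The byte layout is the standard minimal example, not code from the post:

```javascript
// A tiny WebAssembly binary:
// (module (func (export "add") (param i32 i32) (result i32) local.get 0 local.get 1 i32.add))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: one func, type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // code: get 0, get 1, i32.add
]);

// JavaScript glue: compile and instantiate before anything can run.
const mod = new WebAssembly.Module(bytes);
const { exports } = new WebAssembly.Instance(mod);
console.log(exports.add(2, 3)); // 5
```

A `<script type="module">` tag does all of this implicitly for JavaScript; WebAssembly gets no such shortcut today.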
3. Entities enabling scientific fraud at scale (2025)
No summary available.
4. BitNet: 100B Param 1-Bit model for local CPUs
No summary available.
5. Klaus – OpenClaw on a VM, batteries included
Bailey and Robbie are developing Klaus, a user-friendly and secure platform for OpenClaw, which is an AI tool. Setting up OpenClaw typically requires complex steps like configuring cloud virtual machines or containers, but Klaus simplifies this by providing each user with their own preconfigured EC2 instance. They also offer easy integration with services like Slack and Google Workspace through OAuth apps.
To enhance security, Klaus operates on a private network, keeps OpenClaw updated, and ensures that user data is protected. They acknowledge some risks, especially when connecting email accounts, and recommend using Opus 4.6 for better security against certain threats.
In the past month, they have learned a lot about managing infrastructure and have written best practices for using OpenClaw on AWS. They also created ClawBert, an AI tool that helps automatically fix issues with OpenClaw instances.
Klaus offers different pricing plans and provides users with credits to use on their platform. They are interested in hearing from users about what they are building with OpenClaw to improve their service and support.
6. Where Some See Strings, She Sees a Space-Time Made of Fractals
Astrid Eichhorn, a physicist at Heidelberg University, is exploring how the laws of physics behave at very small scales, particularly in a field called asymptotic safety. Unlike some theories that suggest the universe is made of strings or loops, Eichhorn believes that if you zoom in far enough, the laws of physics might stabilize and stop changing—similar to a fractal structure.
At tiny scales, traditional physics struggles, especially with gravity. Eichhorn's work suggests that quantum fields could balance themselves in a way that allows for predictable behavior across different scales. She and her collaborators have worked on models that support this idea, finding fixed points in their calculations where physical laws remain consistent.
Eichhorn's research has implications for understanding particle masses, including the Higgs boson and quarks, suggesting a connection between gravity and other forces. She also considers how her findings might relate to dark matter research, indicating that certain popular dark matter models may not fit with her theory.
Overall, Eichhorn's work aims to unify our understanding of gravity with other fundamental forces, encouraging a humble approach to the ongoing exploration of quantum gravity.
7. 5,200 holes carved into a Peruvian mountain left by an ancient economy
No summary available.
8. I built a tool that watches webpages and exposes changes as RSS
Site Spy is a tool I created after missing an important visa appointment because I didn't notice changes on a government webpage. It monitors specific parts of webpages for updates and shows changes in an easy-to-understand format.
Key features include:
- Tracking specific elements like prices or headlines instead of entire pages.
- Viewing differences and a timeline of changes.
- Receiving updates through RSS feeds, browser notifications, email, or Telegram.
- Available as a Chrome and Firefox extension with a web dashboard.
I'm seeking feedback on two points:
- Is RSS a useful way to receive updates, or do people prefer direct alerts?
- Is tracking specific elements better than monitoring an entire page?
9. The MacBook Neo
The article discusses comments made by the co-CEO of Asus regarding Apple's new MacBook Neo. He believes that this device is surprising and could significantly impact the PC industry. The MacBook Neo introduces innovative features that may change how laptops are designed and used, potentially pushing other companies to rethink their products.
10. Open-source browser for AI agents
The author has created a tool called agent-browser-protocol (ABP) by forking Chromium to improve browser interactions for AI agents. They observed that many issues arise not from the AI misunderstanding web pages, but from it using outdated information. ABP keeps the agent in sync with the browser by freezing actions (like clicks or typing) and capturing the current page state and significant events (like alerts or downloads) before proceeding.
This approach allows the interaction to resemble a back-and-forth conversation, where the agent acts, receives updated information, and then decides on the next steps. ABP helps solve common issues, such as pop-ups blocking inputs or downloads that the agent cannot track.
In testing, ABP achieved a score of 90.5% on a benchmark, suggesting that modern AI models can understand websites well when given the right tools. The author invites questions and provides instructions for trying out ABP, along with a demo video link.
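The act-capture-decide loop described above can be sketched abstractly. The names below (`perform`, `drainEvents`, `snapshot`) are illustrative stand-ins, not ABP's actual protocol:

```javascript
// Sketch of the synchronized loop: act, then capture state and events
// before the agent is allowed to decide its next step.
function step(browser, action) {
  browser.perform(action);              // freeze further input, run one action
  const events = browser.drainEvents(); // alerts, downloads, navigations...
  const snapshot = browser.snapshot();  // current page state
  return { events, snapshot };          // the agent reasons over this, then acts again
}

// A fake browser object just to show the shape of the exchange:
function fakeBrowser() {
  const queue = [];
  let state = "home";
  return {
    perform(action) {
      if (action === "click-login") { state = "login"; queue.push("navigated"); }
    },
    drainEvents() { return queue.splice(0); },
    snapshot() { return state; },
  };
}
```

The point of the design is that the agent never reasons over a snapshot older than its last action, which is the staleness failure mode the author identified.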
11. Wiz joins Google
Wiz has officially joined Google, aiming to enhance cloud security by combining its innovation with Google's scale. Their mission is to help organizations protect everything they build while adapting to the fast-paced changes brought by AI.
Over the past year, Wiz has achieved significant milestones in security research, uncovering vulnerabilities that protect both their customers and the wider industry. They have developed new tools like the Wiz AI Security Platform and Wiz Exposure Management to secure AI applications and provide a comprehensive view of risks.
As part of Google Cloud, Wiz plans to integrate advanced AI capabilities into their platform, continuing to serve a multi-cloud environment and protecting customers across various cloud services. Their commitment to security innovation remains strong, and they are eager to meet the challenges of modern security threats.
Wiz thanks its customers for their trust and emphasizes that they will continue to lead in security solutions as they move forward.
12. Prism (YC X25) – Workspace and API to generate and edit videos
Rajit, Land, and Alex are creating Prism, an AI video creation platform and API. Prism allows users to easily remix videos and automate user-generated content ads without the hassle of switching between multiple tools.
Key features of Prism include:
- A timeline editor where users can generate images and video clips, and assemble them all in one place.
- The ability to test different models and settings for a clip without needing to export and re-import files.
- Support for templates, which can be reused to streamline the video creation process.
- An API that lets AI agents automatically generate videos using community templates.
They developed Prism because they found existing tools cumbersome and time-consuming. With Prism, users can create, review, edit, assemble, and export videos all in one platform.
Pricing is based on usage credits, with a free tier available to try out the service without needing a credit card. They are seeking feedback from users about their experiences and challenges in AI video creation.
13. Searching for the Agentic IDE
No summary available.
14. Lego's 0.002mm specification and its implications for manufacturing (2025)
LEGO bricks made in different years and countries fit together perfectly due to the company’s strict manufacturing standards, maintaining tolerances of just 0.01mm. This precision is essential because even slight variations can prevent bricks from fitting or holding together properly.
LEGO's success comes from not just precise molds but also from tightly controlled manufacturing processes. They switched to ABS plastic in 1963 for better consistency in shaping. The molds are crafted with high precision using advanced techniques, and each mold cavity is tracked to ensure quality.
However, achieving such standards comes with challenges, like managing how different colors of plastic shrink and ensuring larger pieces remain flat. LEGO prioritizes interchangeability and rejects parts that don't meet specifications, resulting in some waste but ensuring all bricks work together seamlessly.
The broader lesson from LEGO is that effective process control is more important than just striving for perfect molds. Understanding the required tolerances for a product is crucial, as not all items need the same level of precision. Manufacturers should design their systems based on what truly matters to their customers while balancing quality and cost.
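The "process control over perfect molds" point has a standard quantitative form: process capability. The sketch below computes Cpk, the textbook capability index (this is general manufacturing math, not from the article; the stud diameter and sigma are hypothetical numbers):

```javascript
// Process capability index: Cpk = min(USL - mean, mean - LSL) / (3 * sigma).
// USL/LSL are the upper/lower specification limits; Cpk >= 1.33 is a common
// bar for a well-controlled process.
function cpk(mean, sigma, lsl, usl) {
  return Math.min(usl - mean, mean - lsl) / (3 * sigma);
}

// Hypothetical stud dimension: nominal 4.80 mm, tolerance +/- 0.01 mm,
// process slightly off-center with a standard deviation of 2 microns.
const capability = cpk(4.801, 0.002, 4.79, 4.81);
console.log(capability.toFixed(2)); // 1.50
```

This is why tracking every mold cavity matters: the index punishes both spread (sigma) and drift off nominal (mean), and either one alone can push parts out of spec.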
15. Fungal Electronics (2021)
Fungal electronics are living electronic devices made from mycelium, the root structure of fungi. These devices can change their electrical properties and produce electrical signals when influenced by external factors. They can be integrated into fungal materials, used in wearable technology, or function as independent sensors and computing devices.
16. How we hacked McKinsey's AI platform
McKinsey & Company has developed an AI platform called Lilli for its employees, which features chat, document analysis, and search capabilities. Launched in 2023, it has been adopted by over 70% of its 43,000 staff, processing over 500,000 prompts monthly.
However, a research team successfully attacked Lilli without needing credentials or insider knowledge. Within two hours, they gained complete access to McKinsey's production database, exploiting a publicly accessible API endpoint that was vulnerable to SQL injection. This allowed them to extract sensitive data, including 46.5 million chat messages, numerous documents, and details about 57,000 user accounts.
The attack also revealed that McKinsey's AI system prompts, which control how the AI operates, were stored in the same database. This means an attacker could potentially alter how Lilli functions, leading to compromised advice and data leaks without detection.
This incident highlights a significant security gap, as it involved a well-resourced firm like McKinsey using a common vulnerability (SQL injection) that their security measures overlooked. The findings emphasize the need for organizations to secure not just code and servers, but also the AI prompts that govern their systems, as they are becoming critical assets.
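The class of bug involved is worth making concrete. The sketch below is illustrative (it is not McKinsey's actual endpoint or schema): it shows why string-built SQL is injectable and why parameterized queries are the standard fix.

```javascript
// UNSAFE: user input is concatenated straight into the SQL text, so input
// can rewrite the query itself.
function unsafeQuery(userId) {
  return `SELECT * FROM chats WHERE user_id = '${userId}'`;
}

// SAFE pattern: keep SQL and values separate; the database driver binds the
// value, so input can never change the query's structure.
function safeQuery(userId) {
  return { text: "SELECT * FROM chats WHERE user_id = $1", values: [userId] };
}

const payload = "' OR '1'='1";
console.log(unsafeQuery(payload));
// SELECT * FROM chats WHERE user_id = '' OR '1'='1'   <- matches every row
```

The same separation-of-data-from-instructions principle is what the article argues should now extend to AI system prompts.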
The research team operates an autonomous security platform called CodeWall, which aims to provide continuous AI-driven security testing for other organizations. The vulnerabilities were disclosed to McKinsey, leading to prompt actions to secure their system.
17. Vanilla JavaScript refinery simulator built to explain job to my kids
A chemical engineer from Texas created a 5-minute browser game to explain refinery operations in an engaging way. The game covers processes like desalting, distillation, and gasoline blending. Although not a software developer, he used language models to help him code a 9,000-line application with HTML, CSS, and JavaScript, including physics-based minigames.
While building the game, he faced challenges such as managing large code files, ensuring physics worked well with CSS, and handling mobile browser behaviors. He also had to create functions to manage memory effectively as the game transitions between different phases. The game is free to play, runs directly in the browser, and does not require ads or sign-ups. He welcomes feedback and questions about both the game mechanics and the science behind it.
You can play the game here: Fueling Curiosity Game.
18. Swiss e-voting pilot can't count 2,048 ballots after decryption failure
A Swiss canton, Basel-Stadt, has halted its electronic voting pilot after being unable to count 2,048 votes from a recent national referendum due to technical issues with USB keys used to decrypt the votes. This pilot, which aimed to assist Swiss citizens living abroad and those with disabilities, faced problems despite having the correct codes. Officials have advised participants to vote by paper instead, although this was not feasible for many.
The incident prompted an external investigation, and the canton expressed regret over the impact on voters' rights. Although the votes affected were a small portion of the total and wouldn’t change the outcomes, the confirmation of voting results has been delayed until March 21. E-voting in three other cantons and the national system remains unaffected.
Switzerland is testing e-voting in four cantons to improve voting for citizens abroad, following a previous failed attempt in 2019 due to security concerns.
19. Satellite imagery object detection using text prompts
I created a web tool that detects objects in satellite images using vision-language models (VLMs). You can select an area on the map and type in a prompt like "swimming pools" or "buses." The tool analyzes the area piece by piece and shows the results on the map.
Here's how it works:
- Choose an area and zoom level.
- Divide the area into smaller tiles.
- Analyze each tile with your prompt using a VLM.
- Convert the detection results into geographic coordinates.
- Display the findings on the map.
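The tiling and georeferencing steps above typically use the standard Web Mercator ("slippy map") scheme. A sketch of the coordinate-to-tile math (these are the common public formulas, not necessarily the tool's exact code):

```javascript
// Convert a WGS84 coordinate to XYZ tile indices at a given zoom level.
// At zoom z there are 2^z tiles per axis.
function latLonToTile(lat, lon, zoom) {
  const n = 2 ** zoom;
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(((1 - Math.asinh(Math.tan(latRad)) / Math.PI) / 2) * n);
  return { x, y };
}

console.log(latLonToTile(0, 0, 1)); // { x: 1, y: 1 }
```

Running detections per tile and then inverting this mapping is what turns VLM pixel outputs back into geographic coordinates for display on the map.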
The tool performs well for clear structures, but it struggles with hidden objects, where specialized detectors like YOLO are better. There’s a public demo available that doesn’t require a login. I'm looking for feedback on how well it detects objects, the performance of VLMs versus specialized detectors, and possible real-world applications.
20. I built an ISP infrastructure emulator from scratch with a custom vBNG
Summary of Aether Project
Aether is a lab designed for Internet Service Provider (ISP) infrastructure that simulates subscriber management for IPv4 networks. Built by a computer science sophomore over a month, it features a Python-based virtual Broadband Network Gateway (vBNG) with RADIUS authentication and traffic management.
Motivation: The creator previously struggled with networking tasks during an internship without guidance. This project aims to help others in similar situations by providing a starting point for understanding ISP infrastructure.
Architecture: The BNG uses an event-driven model to manage session states and communicates through messages. It collects data using Redis Streams and processes it with a component called bng-ingestor.
Traffic Simulation: Traffic is generated using a simulator that runs commands on specific hosts. The project includes a configuration system that simplifies network topology definitions.
Limitations:
- Performance decreases significantly with multiple virtual hops in the network.
- Some advanced features like iBGP, VLAN, and IPv6 support are missing, as the focus is solely on IPv4 networks.
The creator learned a lot about networking through this project and welcomes feedback on the code.
Questions: The user connection circuit is currently chosen randomly in the demo. The creator questions how a real ISP would efficiently assign circuits to customers based on their location.
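The event-driven session model at the heart of a BNG can be sketched as a small state machine. The states and events below are illustrative simplifications of a RADIUS-authenticated subscriber lifecycle, not Aether's actual code:

```javascript
// Minimal subscriber-session state machine in the spirit of the vBNG design.
const transitions = {
  idle:           { connect: "authenticating" },
  authenticating: { accept: "active", reject: "idle" }, // RADIUS Access-Accept / Access-Reject
  active:         { disconnect: "idle" },
};

function nextState(state, event) {
  const next = transitions[state]?.[event];
  if (!next) throw new Error(`invalid event '${event}' in state '${state}'`);
  return next;
}
```

Modeling sessions this way is what lets the BNG process thousands of subscribers as a stream of small messages rather than long-lived per-user threads.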
21. Sentrial (YC W26) – Catch AI Agent Failures Before Your Users Do
Neel and Anay are creating a tool called Sentrial, which helps monitor AI products in production. It automatically identifies issues like failures, mistakes, and user frustrations as they happen. When problems occur, Sentrial analyzes the data to find the cause and suggests fixes.
They experienced challenges debugging AI agents in their previous jobs and noticed that teams struggle to understand why issues happen without proper monitoring. For instance, they saw cases where support agents misclassified requests or generated incorrect outputs.
Sentrial aims to provide essential monitoring for AI products by using a simple SDK integration that detects problems like wrong tool uses and quality regressions before customers notice them. They offer a free trial for users to test the service and welcome feedback from anyone using AI agents.
22. Building a TB-303 from Scratch
No summary available.
23. Visualizing Ukkonen's Suffix Tree Algorithm
The text discusses the challenges of understanding algorithms, particularly Ukkonen's Suffix Tree Algorithm, which is used for efficient substring searching. The author reflects on their learning journey, starting with the dense textbook "Introduction to Algorithms" and later encountering practical difficulties when implementing algorithms from papers.
Key points include:
- Learning Gaps: Textbooks provide theoretical knowledge but often lack practical visualizations that help in truly understanding algorithms.
- Implementation Challenges: Ukkonen's algorithm is complex, involving tree manipulations that are not straightforward to grasp through pseudocode alone.
- Visualization Solution: The author created an interactive visualization using JavaScript and D3.js to illustrate how the suffix tree is built step by step. This allows users to see the entire data structure in action and enhances understanding.
- Generalization: The author believes that this visualization approach can be applied to other data structures, making learning algorithms more accessible.
- Learning with AI: The text also touches on the potential of AI tools, like large language models (LLMs), to assist in learning algorithms by providing explanations and visualizations tailored to individual learning styles.
Overall, the author emphasizes the importance of visual tools in bridging the gap between theoretical algorithms and practical understanding, highlighting the evolving ways we can learn and teach programming concepts.
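To make the data structure itself concrete: a naive suffix trie can be built by inserting every suffix, which is O(n²) in time and space. This is not Ukkonen's algorithm (whose O(n) construction via suffix links is what the visualization animates), but it shows what the structure answers:

```javascript
// Naive suffix trie: insert every suffix of s. Ukkonen's algorithm builds the
// equivalent compressed suffix tree in O(n) by reusing work across suffixes.
function buildSuffixTrie(s) {
  const root = {};
  for (let i = 0; i < s.length; i++) {
    let node = root;
    for (const ch of s.slice(i)) node = node[ch] ??= {};
  }
  return root;
}

// Substring search: every substring of s is a prefix of some suffix,
// so walking the trie from the root answers "is pattern in s?".
function contains(trie, pattern) {
  let node = trie;
  for (const ch of pattern) {
    node = node[ch];
    if (!node) return false;
  }
  return true;
}

console.log(contains(buildSuffixTrie("banana"), "ana")); // true
```

The gap between this 20-line naive version and a correct linear-time implementation (active points, suffix links, edge-label tricks) is precisely the gap the author's visualization tries to bridge.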
24. Zig – Type Resolution Redesign and Language Changes
Devlog Summary
This devlog highlights recent updates to the Zig programming language in 2026. Here are the key points:
- Type Resolution Redesign (March 10, 2026):
  - A major update to the Zig compiler's type resolution logic was completed after months of work.
  - The compiler is now more efficient, only analyzing types when necessary, which reduces compile errors related to unused fields.
  - Improved error messages for dependency loops make it easier to identify and fix issues.
  - Incremental compilation is faster due to bug fixes and reduced unnecessary work.
- New I/O Implementations (February 13, 2026):
  - Implementations for io_uring and Grand Central Dispatch (GCD) have been added to Zig's I/O standard library.
  - These features are still experimental and require further improvements, such as better error handling.
- Package Management Enhancements (February 6, 2026):
  - Packages are now stored locally in a new zig-pkg directory, facilitating easier editing and experimentation.
  - A global cache of dependencies is also maintained to simplify sharing between systems.
  - A new --fork option was introduced for the zig build command, allowing users to override dependencies with local forks easily.
- Native API Usage (February 3, 2026):
  - Zig is moving towards using native Windows APIs (ntdll) instead of higher-level wrappers (kernel32), improving performance and reducing unnecessary resource usage.
- Transition to Zig libc (January 31, 2026):
  - Zig is gradually replacing its dependency on C standard library functions with Zig-native implementations.
  - This transition aims to reduce compilation time and binary size while increasing control over I/O operations.
Overall, these changes enhance the Zig programming experience by improving performance, error handling, and usability, while also making it easier for developers to work with dependencies and I/O operations.
25. Why the global elite gave up on spelling and grammar
No summary available.
26. Faster asin() was hiding in plain sight
On March 11, 2026, a programmer shared their experience working on improving the performance of trigonometric functions in a ray tracing project called PSRayTracing. They initially attempted to use Padé Approximants for faster arcsine calculations but found limited success. Instead, they developed their own Taylor series approximation for the asin() function, achieving a 5% speed increase, but it still had accuracy issues outside certain bounds.
The programmer then explored more optimized methods, including using a half-angle transformation to further reduce errors and improve performance. After testing different implementations, they found that a fast approximation from Nvidia's Cg Toolkit significantly outperformed their previous methods, providing substantial speed improvements (up to nearly 2x faster than the standard asin() function).
The key takeaways from their journey included the importance of thorough research before starting a project and the realization that effective solutions may already exist, emphasizing the value of looking for established methods rather than reinventing the wheel.
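The Nvidia Cg approximation the author landed on is publicly documented; a JavaScript transcription of that well-known polynomial (derived from Abramowitz & Stegun 4.4.45 — the article's C++ version may differ in detail):

```javascript
// Polynomial approximation of asin(x) for x in [-1, 1], after the Cg Toolkit
// reference implementation. Maximum error is on the order of 1e-4 radians.
function fastAsin(x) {
  const negate = x < 0 ? 1 : 0;
  x = Math.abs(x);
  let ret = -0.0187293;
  ret = ret * x + 0.0742610;
  ret = ret * x - 0.2121144;
  ret = ret * x + 1.5707288;
  ret = Math.PI / 2 - Math.sqrt(1 - x) * ret;
  return ret - 2 * negate * ret; // mirror for negative inputs
}

console.log(fastAsin(0.5)); // ~0.52364 (true asin(0.5) = 0.52360)
```

It trades the library call for three fused multiply-adds and a square root; whether that yields the article's near-2x speedup depends on how expensive the platform's `asin()` is.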
27. PeppyOS: A simpler alternative to ROS 2 (now with containers support)
Summary of PeppyOS Robotics Framework
PeppyOS is a user-friendly robotics framework designed to simplify robot development and production. It helps developers focus on creating intelligent robots by managing complex tasks.
Key Features:
- Ease of Use: PeppyOS makes it simple to build and deploy robot software, even for beginners. It integrates all components like sensors, AI, and controllers.
- Quick Start: Users can become productive in just 15 minutes, and PeppyOS is completely free.
- Modular Design: Robots are built using modular nodes, such as cameras and controllers, which can be easily configured.
- Scalable Deployment: The framework allows for scaling from a single prototype to multiple robots, managing tasks like orchestration and updates automatically.
- Multi-Language Support: Developers can write code in Python or Rust, with efficient communication between components.
- Performance: Built in Rust, PeppyOS ensures high performance with low resource usage, allowing more nodes to run on less hardware.
Overall, PeppyOS provides a streamlined approach to robotics, making it accessible and efficient for developers.
28. Cloudflare crawl endpoint
Cloudflare has released a new feature that allows users to crawl entire websites with a single API call using the Browser Rendering tool. This feature is currently in open beta and can automatically discover and render pages from a starting URL, returning the results in formats like HTML, Markdown, and structured JSON.
Key points include:
- The API respects website rules (robots.txt) to ensure compliance.
- Crawling is performed asynchronously, meaning users submit a URL and check back later for results.
- Users can configure crawl settings, such as depth and page limits.
- The tool can skip unchanged pages during repeated crawls to save time and resources.
- It can fetch static HTML quickly without using a browser.
This feature is available on both free and paid plans, but it cannot bypass bot detection or captchas. Users are encouraged to review best practices for setting up their sites for crawling.
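A request sketch may help picture the asynchronous flow. The field names and structure below are assumptions for illustration only, not Cloudflare's documented schema; the configurable knobs (depth, page limits, output formats) are the ones the announcement mentions:

```javascript
// Hypothetical shape of an async crawl submission; consult Cloudflare's
// Browser Rendering documentation for the real endpoint and field names.
function buildCrawlRequest(startUrl, { depth = 2, maxPages = 100 } = {}) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      url: startUrl,                 // starting URL for discovery
      limits: { depth, maxPages },   // configurable crawl settings
      formats: ["html", "markdown"], // renderings to return
      respectRobotsTxt: true,        // the API honors robots.txt
    }),
  };
}
```

Because crawling is asynchronous, the caller would submit a request like this, receive a job identifier, and poll later for results rather than waiting on one long HTTP response.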
29. Writing my own text editor, and daily-driving it
The author shares their experience of creating a custom text editor after being dissatisfied with existing options. They initially used Howl but found its development stagnant and faced limitations with features like file searching and SSH compatibility. Over the past two years, the author developed their own editor, implementing a range of features while focusing on personal workflow needs.
Key points include:
- Motivation for Creation: Frustration with Howl led the author to build an editor that fits their requirements better, allowing for smoother project work and integration of useful features.
- Development Approach: The author maintained a small scope initially, focusing on personal use without unnecessary features for others. They documented issues and fixed bugs as they occurred, which boosted productivity.
- Key Features:
  - Cursor Manipulation: Handling cursor behavior was challenging but necessary for a good user experience.
  - File Browser: The author aimed to replicate the efficient file navigation of Howl, prioritizing usability and speed.
  - Regex Implementation: They built a custom regex engine tailored for their needs, optimizing for performance across various use cases.
  - Highlighting and Search: They implemented an efficient highlighting system and fast project-wide search, enhancing productivity.
- Conclusion: Crafting a personal text editor has proven rewarding, enabling the author to learn new technologies, improve productivity, and rekindle their passion for programming. They encourage others to create their own tools, emphasizing the joy found in overcoming challenges.
30. Yann LeCun raises $1B to build AI that understands the physical world
Yann LeCun has raised $1 billion to develop artificial intelligence that can better understand the physical world. This funding aims to enhance AI's ability to interact with and interpret real-world environments, improving its usefulness in various applications.
31. Tony Hoare has died
Tony Hoare, a renowned computer scientist who won the Turing Award, passed away on March 5, 2026, at the age of 92. He is best known for creating the quicksort algorithm and for his work on programming languages and logic. The author, Jim Miles, reflects on his personal experiences with Hoare, highlighting his warm personality and sharp mind despite health issues.
Hoare studied Classics and Philosophy before transitioning into computer science, and he shared stories of his early career, including a famous wager about quicksort that he won. He also enjoyed watching films during his time at Microsoft and had thoughts on how Hollywood misrepresents genius in films.
In discussions about the future of technology, Hoare expressed skepticism about the limits of current computing power compared to what governments may have access to. His humor and insights will be greatly missed.
32. SSH Secret Menu
No summary available.
33. U+237C ⍼ Is Azimuth
On February 28, 2025, a Wikipedia user named Moyogo updated the Angzarr page, revealing that the symbol ⍼ is called "Azimut" or "direction angle" according to a 1950 catalog from the type foundry H. Berthold AG. This clarified the symbol's meaning.
The symbol ⍼ appears in Berthold symbol catalogs from 1949, 1950, 1951, and 1952, but not in earlier catalogs from 1946 or 1900.
A friend noted that the symbol resembles the path of light through a sextant, a tool used to measure angles, particularly for navigation. An illustration on Wikipedia shows how a sextant measures the sun's altitude, from which latitude can be determined.
34.Create value for others and don’t worry about the returns(Create value for others and don’t worry about the returns)
The text discusses the current anxiety surrounding artificial intelligence (AI) and the pressure to keep up with rapid technological changes. It argues that the fear of falling behind is exaggerated and that AI is just an extension of ongoing progress, not a magical solution. The author emphasizes that while AI can be a useful tool, it is not infallible and should not be viewed as a game changer.
The message warns against jobs that simply create complexity without adding value, as these roles are becoming less viable due to competition and consolidation among larger players. Instead, the author encourages focusing on creating real value for others rather than getting caught in zero-sum games. The key takeaway is to contribute positively to your community and not to get swept up in fear or comparisons, as this approach will lead to greater success and fulfillment.
35.Agents that run while I sleep(Agents that run while I sleep)
The author, Abhishek Ray, discusses the challenges of using AI agents, like Claude, to write code autonomously while he sleeps. He realizes that he has no reliable way to verify if the code produced is correct. Many engineering teams face similar issues as they increasingly rely on AI for code reviews.
Hiring more reviewers isn't a feasible solution, and when AI writes tests for its own code, it doesn't provide an independent assessment. This leads to a situation where the AI can miss errors because both the writing and testing come from the same source.
Ray emphasizes the importance of Test-Driven Development (TDD), which involves writing tests before coding to clarify what the code should do. He suggests that AI can help streamline this process, allowing developers to focus on defining acceptance criteria in plain language before the code is generated.
He outlines a practical workflow for using AI to verify code against acceptance criteria, which helps catch integration issues and bugs before they go live. This approach allows engineers to review failures instead of sifting through code diffs, ultimately making the process more efficient.
In summary, to trust AI-generated code, it's essential to clearly define what "done" looks like through acceptance criteria before the coding begins. This proactive step helps ensure quality and reduces reliance on hope for correctness.
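The workflow described above hinges on acceptance criteria that are machine-checkable before any code exists. As a minimal sketch (the function name and the specific criteria here are hypothetical, not from the article), writing the criteria first might look like:

```python
# Hypothetical example: acceptance criteria are written *before* the code.
# In the workflow described, an AI agent would then be asked to produce an
# implementation that makes these checks pass; the human reviews failures
# rather than sifting through code diffs.

def normalize_username(raw: str) -> str:
    """Candidate implementation (in the workflow, AI-generated)."""
    return raw.strip().lower()

def test_acceptance():
    # "Leading/trailing whitespace is ignored"
    assert normalize_username("  Alice ") == "alice"
    # "Usernames are case-insensitive"
    assert normalize_username("BOB") == "bob"
    # "Already-normalized input is unchanged"
    assert normalize_username("carol") == "carol"

test_acceptance()
print("all acceptance criteria pass")
```

The point is the ordering: the assertions define "done" in plain terms, so a failing check is an actionable review item independent of who, or what, wrote the implementation.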
36.TADA: Speech generation through text-acoustic synchronization(TADA: Speech generation through text-acoustic synchronization)
The article discusses the release of TADA, a tool for generating speech quickly and reliably. It uses a method called text-acoustic synchronization to improve the quality of speech output. The authors, Sharath Rao and Mori Liu, highlight the benefits of this technology in research and its potential applications. The open-source nature of TADA allows others to use and build upon it.
37.Julia Snail – An Emacs Development Environment for Julia Like Clojure's Cider(Julia Snail – An Emacs Development Environment for Julia Like Clojure's Cider)
No summary available.
38.RISC-V Is Sloooow(RISC-V Is Sloooow)
The author has been working on the RISC-V version of Fedora Linux for the past three months. Here are the key points from their experience:
- Triaging Issues: They have reviewed and addressed most issues in the Fedora RISC-V tracker, leaving 17 entries to resolve.
- Package Building: The author has submitted 86 pull requests for various Fedora packages, which have mostly been merged and built for Fedora 43.
- Performance Issues: The current RISC-V hardware is slow, resulting in long build times. For example, building the "binutils" package took 143 minutes on RISC-V compared to much shorter times on other architectures.
- Hardware Requirements: There is a need for better hardware to reduce build times to under one hour with optimizations enabled. Slow builders may lead to package maintainers excluding RISC-V due to delays.
- Use of QEMU: The author utilizes QEMU for local testing and building, achieving faster build times on a more powerful AArch64 desktop.
- Future Plans: There are plans to start building Fedora Linux 44 and to introduce faster builders for RISC-V to improve overall performance.
In summary, to make RISC-V a primary architecture in Fedora, improvements in hardware speed and build management are necessary.
39.Whistleblower claims ex-DOGE member says he took Social Security data to new job(Whistleblower claims ex-DOGE member says he took Social Security data to new job)
No summary available.
40.I emailed 70 consulting partners. No replies. What it taught me(I emailed 70 consulting partners. No replies. What it taught me)
No summary available.
41.Hurricane Electric (HE.NET) IPv6 tunnelbroker page offline due to expired domain(Hurricane Electric (HE.NET) IPv6 tunnelbroker page offline due to expired domain)
No summary available.
42.Bypassing PatchGuard on Windows x64 (2005)(Bypassing PatchGuard on Windows x64 (2005))
No summary available.
43.Hisense TVs force owners to watch intrusive ads(Hisense TVs force owners to watch intrusive ads)
No summary available.
44.NASA's Dart Mission Changed Orbit of Asteroid Didymos Around Sun(NASA's Dart Mission Changed Orbit of Asteroid Didymos Around Sun)
NASA's DART (Double Asteroid Redirection Test) mission successfully altered the orbit of the asteroid system Didymos and its moonlet Dimorphos. When DART collided with Dimorphos on September 26, 2022, it not only changed Dimorphos's orbit around Didymos but also slightly adjusted their shared orbit around the Sun. This marks the first instance where a human-made object has changed a celestial body's path around the Sun.
The impact ejected a large cloud of debris whose recoil amplified the push on Dimorphos, roughly doubling the momentum transferred by the spacecraft alone. Research showed that the binary system's orbital period around the Sun changed by about 0.15 seconds, emphasizing that even small alterations can significantly affect asteroid trajectories over time.
To confirm the impact’s effect on Didymos, scientists used precise measurements, including tracking stellar occultations, where Didymos passed in front of stars. This required global collaboration among volunteer astronomers.
The DART mission demonstrates the potential of using kinetic impactors for planetary defense against hazardous asteroids. NASA is also developing the Near-Earth Object (NEO) Surveyor mission to better detect potential threats from near-Earth asteroids.
45.Debian decides not to decide on AI-generated contributions(Debian decides not to decide on AI-generated contributions)
No summary available.
46.Is Claude down again?(Is Claude down again?)
The user is experiencing 401 errors related to a subscription and is having trouble with OAuth not being able to restore the session. They are wondering if this issue is happening to others as well.
47.Levels of Agentic Engineering(Levels of Agentic Engineering)
Summary of the 8 Levels of Agentic Engineering
The concept of Agentic Engineering describes how to effectively use AI in coding, highlighting the gap between AI's capabilities and how teams use them. There are eight levels of progression, each offering significant improvements in productivity.
- Levels 1 & 2: Basic Assistance - Starting with simple features like code autocompletion (e.g., GitHub Copilot) and AI-focused IDEs, developers use basic tools to get help with coding.
- Level 3: Context Engineering - This involves refining the information given to AI to improve its performance, ensuring that the right context is provided to enhance understanding.
- Level 4: Compounding Engineering - Developers learn from previous interactions with AI. They plan tasks, let the AI execute them, assess the results, and codify lessons learned to improve future sessions.
- Level 5: Capability Expansion - By integrating AI with tools like databases and CI pipelines, developers enhance AI’s ability to act on their code, moving beyond mere suggestions to actively making changes.
- Level 6: Building an Environment - This level focuses on creating a comprehensive system where AI can work independently, using feedback loops and tools to self-correct without human intervention.
- Level 7: Background Agents - Here, AI can work autonomously in the background, executing tasks without needing constant supervision. Developers start to manage multiple agents working on different parts of a project.
- Level 8: Autonomous Agent Teams - The most advanced level involves multiple AI agents coordinating directly with each other, working collaboratively without a single overseer, although this is still a developing area.
The text emphasizes the importance of progressing through these levels to maximize productivity and effectiveness in AI-assisted coding. It also hints at future developments, such as voice interactions with coding agents, suggesting that the evolution of AI in engineering is ongoing.
48.Roblox is minting teen millionaires(Roblox is minting teen millionaires)
No summary available.
49.RunAnywhere (YC W26) – Faster AI Inference on Apple Silicon(RunAnywhere (YC W26) – Faster AI Inference on Apple Silicon)
Sanchit and Shubham, founders of RunAnywhere (YC W26), have created a fast inference engine called MetalRT for Apple Silicon, which outperforms other tools like llama.cpp and Apple's MLX for various tasks, including language models and speech processing. They also released RCLI, an open-source voice AI pipeline that works entirely on-device, without needing cloud services or API keys.
Key features include:
- Speed: MetalRT is significantly faster than competitors in tasks like LLM decoding, speech-to-text (STT), and text-to-speech (TTS).
- On-device performance: The system processes voice commands quickly by minimizing latency across multiple stages of AI processing, which is essential for a smooth user experience.
- Technical approach: MetalRT uses custom GPU compute shaders and avoids unnecessary overhead, allowing for efficient processing of language and speech tasks.
To use RCLI, you can install it via Homebrew or a simple script command. The open-source project also includes features like concurrent processing, local retrieval-augmented generation, and a user-friendly interface.
For more details, you can check their methodology and benchmarks through the provided links. They encourage developers to think about new applications for fast on-device AI.
50.When the chain becomes the product: Seven years inside a token-funded venture(When the chain becomes the product: Seven years inside a token-funded venture)
The author reflects on their experience at Blockstack, which they joined in 2018, focusing on the shift from product development to token economics. Here are the key points:
- Token Economics Impact: The introduction of tokens changed the typical startup sequence. Instead of validating products through user adoption, the market invested in future narratives, allowing companies to gain validation before delivering actual products.
- Feedback Loops: With long feedback cycles (years instead of weeks), organizations relied on narratives rather than real user feedback. This led to a disconnect between the product and its actual users, with early adopters' needs being ignored in favor of imagined future users.
- Developer Community Ignored: Blockstack had a real developer community, but instead of using their feedback, the company prioritized narratives and external trends, leading to the neglect of actual user needs.
- Shift in Focus: Over time, Blockstack shifted its focus from developing user-friendly products to enhancing its blockchain infrastructure, which created a disconnect from actual user requirements.
- Endless "Moments of Value": The company continuously promised milestones (like upgrades and new features) that were supposed to bring value but kept getting delayed. This created a culture of inertia where people stayed invested out of hope rather than evidence.
- Structural Issues: The author notes that these issues are common in token-funded ventures, where the focus shifts from real user needs to supporting a narrative that drives token value.
- Personal Takeaway: After leaving Blockstack, the author emphasizes the importance of real user feedback in product development. They chose not to issue a token for their new venture, Neotoma, opting to prioritize user input and iterative development instead.
In summary, the text illustrates the pitfalls of token-based projects, highlighting how they can lead to neglect of real user needs and a focus on narratives that ultimately hinder product development.
51.Standardizing source maps(Standardizing source maps)
Summary: Source Maps: Shipping Features Through Standards
Source maps are essential for modern web development, enabling developers to debug optimized JavaScript code effectively. For many years, there was no official standard for source maps, leading to confusion and difficulties in adding features. In 2011, Revision 3 of the source map format was introduced, significantly reducing file sizes and improving efficiency.
The need for source maps arose as web development became more complex. Tools like Google’s Closure Tools helped manage this complexity, but developers needed a way to relate compiled code back to their original source files, which is what source maps do.
A source map is essentially a JSON file that includes information about the generated file, its original sources, and how to trace back from the generated code to the source code. The mappings field, which is critical for linking the two, underwent significant changes in Revision 3 to improve size and usability.
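To make the mappings field concrete, here is a small decoder sketch (not tied to any particular tool) for the Base64 VLQ encoding that Revision 3 introduced. Each decoded segment is a list of relative offsets: generated column, source file index, original line, and original column.

```python
# Base64 alphabet used by the source map VLQ encoding.
BASE64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def decode_vlq(segment: str) -> list[int]:
    """Decode one Base64-VLQ segment from a source map 'mappings' string."""
    values, shift, current = [], 0, 0
    for ch in segment:
        digit = BASE64.index(ch)
        current |= (digit & 31) << shift   # the low 5 bits carry data
        if digit & 32:                     # bit 6 set: more digits follow
            shift += 5
        else:                              # last digit of this value
            value = current >> 1
            if current & 1:                # the lowest bit is the sign
                value = -value
            values.append(value)
            shift, current = 0, 0
    return values

# "AAgBC" decodes to the relative offsets [0, 0, 16, 1]
print(decode_vlq("AAgBC"))
```

Because every value is a delta from the previous mapping, most segments stay at one or two characters, which is where Revision 3's size savings come from.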
Despite the success of Revision 3, adding new features remained challenging without a formal standard. In 2023, a group of engineers from various companies, including Bloomberg and Google, formed TC39-TG4 to standardize source maps, leading to the official standard ECMA-426 in 2024.
Looking ahead, new features are being developed, such as "Scopes," which will embed function and variable information directly into source maps, and "Range Mappings," which will allow mappings to apply to entire text ranges rather than single points. These enhancements aim to improve debugging tools and provide a better developer experience.
Overall, the establishment of a source map standard marks significant progress in web development, fostering collaboration across the industry and paving the way for future improvements.
52.I stopped using NixOS and went back to Arch Linux(I stopped using NixOS and went back to Arch Linux)
The author switched from Arch Linux to NixOS for a year but eventually returned to Arch Linux. They liked NixOS's idea of managing system configurations through a special file, allowing for reproducible builds and easy rollbacks. However, they faced several issues:
- Frequent Breakages: NixOS often broke during updates, requiring repeated fixes to the configuration. In contrast, they rarely encountered issues with Arch.
- Large Update Sizes: NixOS's method of handling dependencies led to larger updates and increased disk usage, while Arch automatically removes old files, making updates easier.
- Slow Compilations: NixOS frequently compiles packages from source, which can take hours, especially without reliable binary caches. Arch, on the other hand, allows quick updates with prebuilt binaries.
In conclusion, the author prefers Arch Linux for its simplicity and speed over NixOS’s complexity, suggesting that NixOS may not be suitable for everyday use unless you have specific needs.
53.Universal vaccine against respiratory infections and allergens(Universal vaccine against respiratory infections and allergens)
Researchers at Stanford Medicine have made significant progress toward creating a universal vaccine that could protect against various respiratory viruses, bacteria, and allergens. In a study published in February, they tested this vaccine on mice, which showed protection against SARS-CoV-2, common hospital infections, and allergens like dust mites.
This new vaccine is different from traditional vaccines that target specific parts of pathogens. Instead, it works by mimicking the signals that immune cells use to communicate during infections, which helps activate both innate and adaptive immune responses. This dual approach provides broad protection and can sustain immunity for months.
The vaccine, currently called GLA-3M-052-LS+OVA, is delivered as a nasal spray. In mice, it reduced the severity of illnesses caused by viruses and bacteria, and it even helped with allergic reactions. Researchers plan to test the vaccine in humans soon, with hopes that it could be available within five to seven years. If successful, it could simplify vaccinations for seasonal respiratory infections and provide a strong defense against future pandemics.
54.Ink – Deploy full-stack apps from AI agents via MCP or Skills(Ink – Deploy full-stack apps from AI agents via MCP or Skills)
Ink is a deployment platform designed for AI agents, allowing them to deploy applications without human intervention. Key features include:
- Automated deployment: Agents simply call "deploy," and Ink handles the entire process, from detecting the framework to providing a live URL.
- All-in-one tools: It combines services like compute, databases, DNS, and more into one platform, eliminating the need for multiple accounts or tools.
- DNS management: Agents can instantly create subdomains without manual DNS record updates.
- Collaboration: Multiple agents and humans can work together in shared projects.
- Built-in git hosting: Agents can push code and deploy without needing to set up a separate GitHub account.
Additional features include user-friendly observability tools, GitHub integration for automatic redeploys, per-minute billing, and error responses designed for AI agents to handle failures autonomously.
You can try Ink for free with $2 in trial credits, and there's a 20% discount code available.
55.Let yourself fall down more(Let yourself fall down more)
The blog post discusses the importance of embracing failure as a part of learning new skills. The author shares their experience of getting back on inline skates after many years. While they didn't fall on the first day, they learned more and improved faster after falling on the second day.
The author points out that as children, we learn to walk through many falls and bumps, but as adults, we often avoid falling to prevent pain. This fear can hold us back from fully committing to new experiences. They encourage letting go of this fear, as falling safely can lead to faster learning and improvement in various skills, such as singing, playing an instrument, and writing.
Overall, the message is that being willing to take risks and accept the possibility of failure can lead to greater success, as long as we prioritize safety.
56.FFmpeg-over-IP – Connect to remote FFmpeg servers(FFmpeg-over-IP – Connect to remote FFmpeg servers)
No summary available.
57.Meta acquires Moltbook(Meta acquires Moltbook)
Meta has acquired Moltbook, an AI-powered social network that gained popularity for its fake posts. This move is part of Meta's strategy to enhance its offerings in artificial intelligence and social networking. The acquisition is significant as it reflects the growing trend of integrating AI into online platforms.
58.Mesh over Bluetooth LE, TCP, or Reticulum(Mesh over Bluetooth LE, TCP, or Reticulum)
No summary available.
59.Nvidia is reportedly planning its own open source OpenClaw competitor(Nvidia is reportedly planning its own open source OpenClaw competitor)
No summary available.
60.AMD Ryzen AI NPUs Are Finally Useful Under Linux for Running LLMs(AMD Ryzen AI NPUs Are Finally Useful Under Linux for Running LLMs)
AMD has made significant progress with its Ryzen AI NPUs for Linux, allowing them to run large language models (LLMs). For the past two years, support for these NPUs was limited, but the release of Lemonade 10.0 introduces effective NPU support for LLMs and Whisper. This update uses the FastFlowLM runtime, which can handle context lengths up to 256,000 tokens.
To use this new support, users need the Linux 7.0 kernel or updated AMDXDNA driver. This feature is compatible with all current AMD Ryzen AI 300 and 400 series systems. The release is timely, as more Ryzen AI products are expected in the market, which may lead to increased use of Linux in these devices.
Documentation is available for setting up LLMs with the new software, and there is optimism about further testing and benchmarking of this technology.
61.Surpassing vLLM with a Generated Inference Stack(Surpassing vLLM with a Generated Inference Stack)
No summary available.
62.Elevated errors on login with Claude Code(Elevated errors on login with Claude Code)
No summary available.
63.GitHub Accounts Compromised(GitHub Accounts Compromised)
A North Korean hacker group, known as PolinRider, has been found to be implanting malware in hundreds of GitHub repositories. This malware, a variant called Beavertail, is designed to steal sensitive information like login credentials and cryptocurrency, and it can also install a remote access tool (RAT).
The attack affects many public repositories, with the number of compromised repositories increasing rapidly. As of March 8, 2026, 675 repositories belonging to 352 unique owners had been infected. The malware is hidden within legitimate project configuration files, making it hard to detect during code reviews.
The infection likely stems from a malicious package used in the software development process, which can insert the malware during installation or building of projects. One notably affected project, Neutralinojs, has a large user base, leading to the malware spreading widely among its contributors.
The OpenSourceMalware team attributes this campaign to the DPRK and links PolinRider to other known cyber attacks. Users are advised to check for the hashtag #polinrider for related reports and to prioritize action on listed compromised repositories.
64.Didit (YC W26) – Stripe for Identity Verification(Didit (YC W26) – Stripe for Identity Verification)
Alberto and his twin brother Alejandro co-founded Didit, which aims to simplify identity verification online. They are creating a comprehensive system that integrates various identity checks like KYC (Know Your Customer), AML (Anti-Money Laundering), and biometric authentication.
Growing up as identical twins, they experienced identity confusion firsthand, motivating them to address these challenges in the digital space. They found that current identity solutions are often complicated and fragmented, requiring different providers for various tasks, which can lead to inefficiencies and high costs, particularly for startups.
Didit aims to provide an easy-to-use platform similar to Stripe, allowing users to quickly start identity verifications with clear pricing. They built their own technology instead of just combining existing services, which helps ensure better data security and privacy.
The platform is designed to improve user onboarding and reduce identity verification costs, with a focus on minimizing data collection and maximizing privacy. It works effectively even in low-bandwidth situations. Didit is fully operational and offers transparent pricing, inviting feedback on their services.
65.Exploring the ocean with Raspberry Pi–powered marine robots(Exploring the ocean with Raspberry Pi–powered marine robots)
No summary available.
66.Where did you think the training data was coming from?(Where did you think the training data was coming from?)
Meta's smart glasses are designed to record people and send data directly to Facebook's servers, raising privacy concerns. The author questions why anyone would expect these AI glasses to be private, especially given the prevalence of surveillance in technology.
He highlights that many devices, like laptops and smartphones, can record users and send their data to companies like Microsoft and Google, often without clear user consent. Even Apple, known for its privacy stance, has faced scrutiny for data handling.
Meta's business model heavily relies on advertising, which drives them to collect vast amounts of user data. The article emphasizes that AI technology is built on user information, including video and audio, and warns that if you own a device with a camera or microphone, it will likely monitor you. The takeaway is that users should not expect privacy from any internet-connected device they do not control.
67.M5 MacBook Air Review: Not just more of the same–the same, but more(M5 MacBook Air Review: Not just more of the same–the same, but more)
No summary available.
68.Pike: To Exit or Not to Exit(Pike: To Exit or Not to Exit)
Summary of Pike - Solving the Road Trip Exit Dilemma
Pike is a new app aimed at improving the experience of choosing exits while road-tripping, addressing the shortcomings of Google and Apple Maps. Current maps often provide insufficient options for stops, focusing on nearby places rather than those that are conveniently accessible from upcoming exits.
Key Features of Pike:
- Users can swipe through upcoming exits and see options for food and rest areas at a glance.
- All recommended stops are within a quick 5-minute drive from the exit.
- Designed for road-trippers who want to avoid missing good stops or ending up at disappointing places.
Target Users:
- The app is especially beneficial for travelers, like the author and his wife, who want to find suitable dining options and rest areas during long drives.
- Future updates will include dog parks for traveling pet owners.
Development Journey: The app's development involved several iterations:
- Initial Concept: Tried to find restaurants ahead in the driving direction, which proved inaccurate.
- Interstate Graph: Created a graph based on interstate data but ran into issues with recommending exits not accessible based on travel direction.
- Directed Graph: Adjusted to consider travel direction but struggled with messy data.
- Pre-computed Sequences: Improved by creating a fixed sequence of exits, but faced issues with recommending exits that didn’t lead to any stops.
- Driving Time Search: Ultimately, the app now accurately pre-computes travel times from exits to various points of interest, ensuring recommendations are timely.
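The final approach can be sketched as a shortest-path pre-computation over a road graph. The graph, node names, and the 5-minute threshold below are illustrative assumptions, not Pike's actual data or code:

```python
import heapq

# Toy road graph (illustrative only): edge weights are driving minutes.
ROADS = {
    "exit12": {"diner": 3, "junction": 2},
    "junction": {"gas": 2, "mall": 6},
    "diner": {}, "gas": {}, "mall": {},
}
POIS = {"diner", "gas", "mall"}  # points of interest; other nodes are road geometry

def drive_times(start: str) -> dict[str, float]:
    """Dijkstra's algorithm: minutes from a given exit to every reachable node."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, w in ROADS.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return dist

# Pre-compute once per exit, then recommend only stops within 5 minutes.
times = drive_times("exit12")
nearby = sorted(p for p in POIS if times.get(p, float("inf")) <= 5)
print(nearby)
```

Pre-computing per exit means the app never suggests an exit whose stops are unreachable in the driving direction, which was the failure mode of the earlier iterations.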
Final Thoughts: The author learned valuable lessons about working with map data and the importance of accurate information. Pike aims to simplify road trip planning and ensure travelers make the best stops along their route. Feedback from users is welcomed as the app continues to develop.
69.We are building data breach machines and nobody cares(We are building data breach machines and nobody cares)
The text discusses the challenges and dangers of using AI agents, drawing an analogy with the video game "Castlevania." In this metaphor, AI agents are likened to Dracula, who acts without moral constraints, while security practitioners are compared to the Belmont clan, who must constantly fight against these agents' unchecked actions.
Key points include:
- Nature of AI Agents: AI agents operate by executing loops of tasks based on prompts and context, capable of making potentially harmful decisions, such as deleting code or altering databases.
- Security Challenges: The lack of industry standards makes it difficult to ensure safety when using AI agents. Different AI models have inconsistent APIs, complicating the development of reliable agents.
- Non-Determinism: AI agents can produce different outputs from the same inputs, making it challenging to debug issues. This unpredictability raises concerns about their reliability in security-sensitive tasks.
- Industry Neglect: There is a troubling lack of focus on security in AI development. Many in the industry prioritize innovation over safety, creating risks like unauthorized access to sensitive systems.
- Recommendations: The text advocates for a security-first approach in AI design, emphasizing the need for robust, traditional security measures—like anomaly detection and access controls—rather than relying on AI for security.
In conclusion, the author warns that while AI has great potential, it poses significant security risks that must be addressed proactively to prevent data breaches.
70.Invoker Commands API(Invoker Commands API)
Summary of Invoker Commands API
The Invoker Commands API allows buttons on a webpage to control interactive elements without needing complex JavaScript. When a button is clicked or activated by a keypress, it can perform specific actions.
Key Points:
- Purpose: The API simplifies how buttons control elements like popups or text formatting by using HTML attributes instead of requiring JavaScript event listeners.
- HTML Attributes:
  - commandfor: Links a button to a specific element by its ID.
  - command: Defines what action the button will perform on the linked element.
- Events:
  - CommandEvent: Notifies when a command from a button is issued, triggered on the controlled element.
- JavaScript Properties:
  - commandForElement: Represents the element controlled by the button.
  - command: Represents the action the button will take.
- Examples: The API can be used to create popups, dialogs, and custom commands easily.
This API enhances interactivity on webpages while improving performance by reducing the need for JavaScript execution.
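For illustration, the two attributes combine like this to toggle a popover with no JavaScript at all (the element ID here is made up):

```html
<button commandfor="help-popover" command="toggle-popover">Help</button>

<div id="help-popover" popover>
  Press Esc or click the button again to close.
</div>
```

The button's commandfor attribute points at the target element's ID, and command names the built-in action to run on it.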
71.Mother of All Grease Fires (1994)(Mother of All Grease Fires (1994))
No summary available.
72.Open Weights isn't Open Training(Open Weights isn't Open Training)
Summary of "Open Weights isn't Open Training" by Addie Foote
The article discusses the challenges faced when trying to post-train a large open-source machine learning model, Kimi-K2-Thinking, which has 1 trillion parameters. The author shares experiences of debugging and the complexities of using existing open-source tools due to hidden issues and inefficiencies.
Key Points:
- Initial Approach: The author initially tried to use existing open-source libraries for training but encountered numerous bugs and inefficiencies, leading to the decision to create a custom training codebase.
- Model Specifications: Kimi-K2-Thinking is a complex model requiring significant GPU memory (594 GB). The author decided on hardware specifications to accommodate this.
- Dataset Creation: A dataset was created to train the model to respond like Yoda, gathering questions and generating responses.
- Challenges Faced:
  - Compression Issues: The initial model compression was slow. The author identified that the model was already quantized, so additional compression was unnecessary.
  - Memory Management: The loading process was inefficient due to a lack of proper memory management in PyTorch, which led to out-of-memory errors.
  - Weight Initialization Problems: The quantized weights did not work as expected with the training setup, requiring adjustments to the training script.
- Successful Training: After multiple adjustments, the model was able to train successfully, showing a decrease in loss and generating responses that mimic Yoda's speech.
- Conclusions: The author reflects on the difficulties of using open-source ML infrastructure, suggesting that the complexity and hidden issues often outweigh the benefits. They note that while open-source models can democratize AI, the reality of implementation can be much more challenging than anticipated.
Overall, the author emphasizes the need for deeper understanding and potentially new solutions rather than simply patching existing frameworks.
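The memory-management problem described above is a generic one when loading large checkpoints; a minimal sketch of the usual mitigation (memory-mapping weights from disk instead of materializing them in RAM), using NumPy rather than the author's unpublished PyTorch code — the file name and array shape are purely illustrative:

```python
import os
import tempfile
import numpy as np

# Illustrative sketch only: the article's actual fix was in its PyTorch
# loading path, which is not shown. This demonstrates the general idea of
# memory-mapping a checkpoint instead of copying it fully into RAM.

# Write a stand-in "checkpoint" tensor to disk.
path = os.path.join(tempfile.mkdtemp(), "weights.npy")
np.save(path, np.ones((1024, 1024), dtype=np.float32))

# mmap_mode="r" maps the file into the address space; pages are faulted
# in lazily as they are touched, so peak RAM stays far below file size.
weights = np.load(path, mmap_mode="r")

print(type(weights).__name__, weights.shape)
```

The same idea is what `torch.load(..., mmap=True)` provides in recent PyTorch releases, though whether the author used that path is not stated in the article.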
73.After outages, Amazon to make senior engineers sign off on AI-assisted changes(After outages, Amazon to make senior engineers sign off on AI-assisted changes)
No summary available.
74.EQT eyes potential $6B sale of Linux pioneer SUSE, sources say(EQT eyes potential $6B sale of Linux pioneer SUSE, sources say)
No summary available.
75.Support for Aquantia AQC113 and AQC113C Ethernet Controllers on FreeBSD(Support for Aquantia AQC113 and AQC113C Ethernet Controllers on FreeBSD)
Summary:
Aquantia has made a feature request to add driver support for the AQC113 and AQC113C Ethernet controllers in FreeBSD. These controllers are important for high-performance networking, and adding support would improve FreeBSD's compatibility with servers, NAS systems, and workstations.
Key Points:
- Current Situation: The AQC113 devices are detected by the system but lack driver support, showing "no driver attached" when checked.
- Expected Outcome: The devices should function properly in FreeBSD, enabling advanced features like NBase-T and 10GBase-T networking.
- Details:
- The devices are from Aquantia Corp. and have specific PCI IDs.
- Other operating systems (like OpenBSD and Linux) already support these devices.
- Request: Aquantia asks for the AQC113 family to be supported either by enhancing the existing driver or creating a new one. They are willing to help with testing and debugging.
This enhancement would greatly benefit FreeBSD users with AQC113 hardware.
76.I built a programming language using Claude Code(I built a programming language using Claude Code)
In early 2026, the author created a new programming language called Cutlet, using Claude Code, naming it after their cat. The entire source code is available on GitHub. The author had previously used language models (LLMs) for simple tasks but decided to have Claude generate all the code for Cutlet without reviewing it, relying on guardrails to ensure its functionality. Cutlet is operational on macOS and Linux, capable of running programs, although it may contain bugs typical of new languages.
Cutlet features standard elements like arrays, strings, and various operators. It allows for vectorized operations and filtering using boolean arrays. Functions are defined with the fn keyword, and everything is treated as an expression. While Cutlet lacks some features like file I/O and error handling, it has common programming constructs like loops and objects.
The author, a frontend engineer, built Cutlet to explore using LLMs for programming without traditional verification processes. They found that while LLMs are good for specific tasks, they struggle with visual design and novel projects. The experiment aimed to push the limits of LLM-driven programming, leading to a successful, albeit experimental, language.
The author emphasized the continued importance of software engineering skills, noting that while LLMs can automate some tasks, human expertise remains essential. They identified four key skills for working effectively with coding agents: understanding suitable problems for LLMs, clear communication of intent, creating a conducive environment for LLMs, and optimizing workflows.
The project included extensive testing and debugging tools to enhance Claude's effectiveness. The author observed that coding agents can be inefficient, so they streamlined processes to improve performance.
Despite doubts about job security in software engineering due to LLM advancements, the author believes there will always be a need for skilled engineers. They feel conflicted about taking credit for Cutlet since much of the work was done by Claude and based on existing programming knowledge.
The author also discussed the potential addictive nature of using LLMs and the need for healthy usage limits. Looking ahead, they see LLMs enabling rapid experimentation and reducing reliance on third-party libraries. While they plan to focus on other projects, they might still make minor updates to Cutlet in the future.
77.Bippy: React Internals Toolkit(Bippy: React Internals Toolkit)
Summary of Bippy Toolkit
Bippy is a toolkit designed to access the internal workings of React, which are typically off-limits. It does this by mimicking the React DevTools, allowing users to navigate the "fiber tree" (the structure React uses to manage components).
Key Features:
- No React Code Changes Needed: Bippy works without modifying your React code.
- Broad Compatibility: It supports modern versions of React (17-19) and does not require prior knowledge of React's source code.
- Utility Functions: Bippy offers functions to traverse fibers and access component information easily.
How It Works:
- Accessing Fibers: Bippy allows interaction with React fibers (units of execution) outside of React components. Each fiber represents a component or a DOM element and contains useful information like props and state.
- Global Hook: Bippy uses a property in the window object related to React DevTools to hook into React's internal processes, enabling access to fiber data.
Installation and Usage:
- Install with npm and import Bippy before any React code runs. This ensures it can gather information as React initializes.
- For Next.js and Vite projects, specific configurations are needed to maintain the correct import order.
API Overview:
- instrument: Patches the global hook for React events.
- traverseFiber and traverseRenderedFibers: Functions to walk through the fiber tree and identify rendered fibers.
- overrideProps, overrideHookState, and overrideContext: Functions to dynamically change component props, state, and context during runtime.
Example Use Case: Bippy can highlight rendered elements in a React app by creating visual indicators around DOM nodes.
Glossary:
- Fiber: A core unit in React representing components or DOM elements.
- Commit: The process of applying changes to the UI.
- Renderer: The specific implementation of React for different environments (e.g., web, mobile).
In summary, Bippy provides an easy way to explore and manipulate React's internals without requiring deep expertise in React's architecture.
78.Tell HN: Apple development certificate server seems down?(Tell HN: Apple development certificate server seems down?)
The author is experiencing issues installing development apps on their devices since 11 AM PDT. They checked the Apple developer system status website but found no updates. Other users on Reddit are facing similar problems. Additionally, the author is now receiving intermittent 502 errors from Apple's service. It seems there is a larger issue affecting app installations.
79.Slow is smooth and smooth is fast: What software teams can learn from Navy SEALs(Slow is smooth and smooth is fast: What software teams can learn from Navy SEALs)
Summary: "Slow is Smooth and Smooth is Fast: What Software Teams Can Learn from Navy SEALs"
The saying "Slow is smooth and smooth is fast," popularized by Navy SEALs, emphasizes that taking time to understand a problem leads to quicker and better solutions in software development. Rushing to write code often results in misunderstandings and errors that require time-consuming corrections, ultimately slowing down the project.
The author shares their experience of starting development with thorough planning rather than immediately coding. They advocate for a "working backwards" approach, focusing first on user experience and desired outcomes before writing code. This method helps clarify requirements and reduces the need for later revisions.
Additionally, the author suggests creating a throwaway prototype to test ideas without the pressure of maintaining code, allowing for experimentation and learning. This process leads to clearer and more efficient production code, as developers have a better understanding of the solution before starting.
Despite initial appearances of being slower, this approach results in fewer mistakes, less rework, and faster delivery of reliable software. The principle remains relevant even in an age where AI can quickly generate code; understanding the problem remains crucial for effective solutions. Taking time to think and plan is an investment that pays off in the long run, leading to higher quality and efficiency in software development.
80.U.S. at Fault in Strike on School in Iran, Preliminary Inquiry Says(U.S. at Fault in Strike on School in Iran, Preliminary Inquiry Says)
No summary available.
81.Throwing away 18 months of code and starting over(Throwing away 18 months of code and starting over)
The author discusses their experience of developing a product over 18 months, only to decide to start from scratch. This decision comes after multiple pivots in their startup, Autonoma, and some initial success with clients. They realize that their approach of prioritizing speed over quality—specifically not using tests—led to numerous bugs and issues, negatively impacting their product and losing a client.
They explain that, while initially aiming for a complex solution, advancements in technology have made it possible to simplify their approach. They opt for a complete rewrite of their code, emphasizing the importance of starting with tests and stricter coding standards.
The author also criticizes their previous reliance on Next.js and Server Actions, citing issues like poor testing capabilities, performance problems, and security vulnerabilities. They instead choose to use React with tRPC and a Hono backend, which significantly reduces their resource usage and improves efficiency.
In terms of orchestration for managing complex tasks, they find existing solutions inadequate and decide to go with Argo, a Kubernetes-native technology that suits their needs better.
Overall, they reflect on their learning journey, expressing openness to feedback and inviting others to engage with their new product as it nears launch.
82.Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy(Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy)
No summary available.
83.I used pulsar detection techniques to turn a phone into a watch timegrapher(I used pulsar detection techniques to turn a phone into a watch timegrapher)
Summary: Building a Timegrapher Using a Phone Microphone
Professional timegraphers for watches can be quite expensive, ranging from $500 to $3,000. They measure how much a watch gains or loses time by using a sensor that listens to the watch's escapement. The author aimed to create a similar device using just a phone microphone, which initially provides very low sound quality (Signal-to-Noise Ratio of only about 1.5 dB).
A timegrapher works by measuring the sounds made by a watch's mechanical movements, specifically the "ticks" and "tocks." With a common frequency of 28,800 beats per hour, the device detects these sounds to calculate three main things: the watch's rate (how much it gains or loses time), beat error (difference between tick and tock intervals), and amplitude (swing of the balance wheel).
To improve the low sound quality from the phone microphone, the author developed a Digital Signal Processing (DSP) pipeline. The process involves filtering out unwanted noise, detecting the ticks, and improving sound quality through a technique called epoch folding, which averages multiple signals to enhance clarity.
Key steps in the DSP pipeline include:
- Filtering: Removing noise outside the frequency range of watch ticks.
- Envelope Extraction: Smoothing the signal to highlight tick peaks.
- Epoch Folding: Averaging multiple ticks to improve signal clarity.
- Autocorrelation: Fine-tuning the detection of tick periods.
- Kalman Filtering: Stabilizing rate estimates over time to improve accuracy.
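The epoch-folding step — the pulsar-astronomy trick of the title — is easy to demonstrate on synthetic data. A minimal NumPy sketch (tick shape, period, and epoch count are made up for illustration, not taken from the app):

```python
import numpy as np

# Epoch folding: average many noisy repetitions of a periodic signal
# (here, simulated watch ticks) so uncorrelated noise averages down by
# sqrt(N) while the periodic tick survives. All parameters illustrative.
rng = np.random.default_rng(0)
period = 1000             # samples per tick interval
n_epochs = 200            # number of tick intervals to fold together

# One clean "tick": a short decaying burst at the start of each period.
tick = np.zeros(period)
tick[:40] = np.exp(-np.arange(40) / 8.0)

# Noisy recording: the tick train buried in strong Gaussian noise.
noisy = np.tile(tick, n_epochs) + rng.normal(0.0, 1.0, period * n_epochs)

# Fold: reshape into (epochs, period) and average across epochs.
folded = noisy.reshape(n_epochs, period).mean(axis=0)

# Rough SNR: peak height vs. noise level (folded noise measured in a
# region of the period that contains no tick energy).
snr_raw = tick.max() / noisy.std()
snr_folded = tick.max() / folded[200:].std()
print(f"SNR raw: {snr_raw:.2f}, folded: {snr_folded:.2f}")
```

With 200 folded epochs the noise floor drops by roughly a factor of 14 (sqrt of 200), which is how a tick buried below the noise becomes detectable.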
The author emphasizes that positioning of the watch relative to the microphone is crucial for accurate readings. The resulting device provides rate accuracy of ±2-5 seconds per day, which is good for casual watch collectors but not as precise as professional equipment.
The app, ChronoLog's audio timegrapher, is available on iOS, with an Android version in testing. It allows users to quickly check their watch's accuracy in a quiet setting, making it a useful tool for watch enthusiasts without the high cost of professional equipment.
84.No, it doesn't cost Anthropic $5k per Claude Code user(No, it doesn't cost Anthropic $5k per Claude Code user)
The recent claim that Anthropic's Claude Code Max plan costs $5,000 per user is misleading. This figure comes from a Forbes article that confused retail API prices with actual compute costs. While the retail prices for using Claude Code might lead to a $5,000 monthly cost for heavy users, Anthropic's actual compute costs are roughly 10% of that, around $500.
Many users do not reach the high consumption limits, and Anthropic has indicated that less than 5% of users would be affected by usage caps. In fact, the average user spends about $6 per day on API usage, which translates to about $180 per month.
The $5,000 figure primarily reflects what companies like Cursor pay to access Anthropic's models, not what it costs Anthropic to provide those services. Thus, while Anthropic is not currently profitable overall, the inference costs for average users are likely break-even or even profitable. This narrative about high costs in AI inference is misleading and can hinder competition in the market. To understand the true cost of AI services, it's better to look at prices from competitive open-weight model providers.
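The arithmetic behind these figures is simple enough to check directly; a back-of-envelope sketch using only the numbers quoted in the article (all of them the article's claims, not measured data):

```python
# Back-of-envelope check of the article's figures (illustrative only).
retail_monthly_heavy = 5000   # $/month at retail API prices, heavy user
compute_ratio = 0.10          # compute claimed to be ~10% of retail price
compute_cost_heavy = retail_monthly_heavy * compute_ratio   # ~$500

avg_daily_api = 6             # $/day retail API usage, average user
avg_monthly = avg_daily_api * 30                            # ~$180

print(compute_cost_heavy, avg_monthly)
```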
85.LoGeR – 3D reconstruction from extremely long videos (DeepMind, UC Berkeley)(LoGeR – 3D reconstruction from extremely long videos (DeepMind, UC Berkeley))
LoGeR (Long-Context Geometric Reconstruction with Hybrid Memory) is a system developed by researchers at Google DeepMind and UC Berkeley. It allows for detailed 3D reconstruction from very long video sequences, processing them in smaller parts and using a special memory system to manage complexity. LoGeR combines two techniques: Sliding Window Attention (SWA) for accurate local alignment and Test-Time Training (TTT) for maintaining consistency across long video sequences. This approach helps avoid errors over sequences of up to 19,000 frames without needing additional adjustments after the initial processing.
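The local half of LoGeR's hybrid, sliding-window attention, restricts each frame to attend only to its neighbors. A toy mask construction in NumPy showing just the masking idea (window size is illustrative; this is not LoGeR's actual model code):

```python
import numpy as np

# Toy sliding-window attention mask: frame i may attend only to frames
# within +/- w of itself, keeping cost linear in sequence length.
n_frames, w = 12, 2   # illustrative values, not from the paper
idx = np.arange(n_frames)
mask = np.abs(idx[:, None] - idx[None, :]) <= w   # boolean (n, n) mask

# Interior frames see exactly 2*w + 1 frames; edge frames see fewer.
print(mask.sum(axis=1))
```

Long-range consistency across 19,000 frames is then handled by the separate test-time-training memory, which this sketch does not attempt to show.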
86.PgAdmin 4 9.13 with AI Assistant Panel(PgAdmin 4 9.13 with AI Assistant Panel)
Summary of Query Tool Documentation
The Query Tool is a feature in pgAdmin that lets users execute SQL commands and view results. Here are the main points:
- Accessing the Tool: Users can open the Query Tool from the Tools menu or the context menu in the Object Explorer.
- Key Features:
  - Execute SQL queries.
  - Edit results of updatable queries.
  - Save output to CSV files.
  - Review execution plans in different formats.
  - Open multiple Query Tool tabs.
- Panels in the Query Tool:
  - SQL Editor: For writing and executing queries with features like syntax highlighting and autocompletion.
  - Data Output Panel: Displays query results and execution messages.
  - Query History Panel: Logs recent queries with details on execution time and results.
  - AI Assistant Panel: Generates SQL from natural language descriptions.
  - Explain Panel: Shows execution plans for queries.
  - Messages and Notifications Panels: Provide feedback on query execution and server notifications.
  - Graph Visualiser Panel: Creates visual graphs of query results.
- Workspace Layout: Offers a focused area for using the Query Tool, allowing easy server connections.
- Connection Management: Users can easily change database connections and manage settings.
- Macros: Users can create shortcuts for frequently used SQL commands, making the process faster.
- Server-Side Cursors: Useful for retrieving large datasets, but they require transactions and may have performance limitations.
This documentation guides users in effectively utilizing the Query Tool for database management and SQL execution in pgAdmin.
87.Lotus 1-2-3 on the PC with DOS(Lotus 1-2-3 on the PC with DOS)
Summary:
The text discusses the evolution of spreadsheet software, focusing on Lotus 1-2-3 and its impact compared to VisiCalc. VisiCalc was the first spreadsheet program but faced challenges, which Lotus 1-2-3 effectively addressed with superior features.
Lotus 1-2-3 became a "killer app" for the IBM-PC, offering improved usability through its one-click graphing feature and an integrated approach that included spreadsheet, graphing, and database functionalities. It outperformed VisiCalc significantly, achieving $53 million in sales in its first year versus a projected $1 million.
Key features of Lotus 1-2-3 included:
- An easy-to-use interface that built on VisiCalc's concepts while improving upon them.
- Enhanced graphing tools and the introduction of relative and fixed cell references, which simplified formula management.
- Integration with databases, allowing for effective data handling and analysis.
The text also highlights Lotus 1-2-3's historical significance in shaping modern spreadsheet functionality, illustrating how it combined usability with advanced features to dominate the market. Despite its initial success, Lotus 1-2-3 eventually faced competition from Microsoft Excel, leading to its decline. Overall, the text emphasizes that Lotus 1-2-3 transformed how businesses utilized spreadsheets, paving the way for future developments in the field.
88.I put my whole life into a single database(I put my whole life into a single database)
The text describes a JavaScript application that handles displaying stories from a user on a website, specifically from Instagram. Here are the key points:
- Initialization: The app connects to a server (instapipe.net) and retrieves stories for a specific user by their user ID.
- Preloading Stories: It fetches the stories asynchronously and checks if there are any available. If the first story is an image, it preloads it.
- Displaying Stories: When stories are ready to show, it updates the user interface to display the user's profile picture and sets up progress bars for each story.
- Story Navigation: Users can view the current story, which can be a photo or video. The app allows users to navigate through stories using the left and right arrow keys or buttons.
- Progress Indication: Each story has a progress bar that fills up based on how long the story is displayed, with specific handling for videos (which have variable lengths).
- Dismissal: Users can exit the story viewer by pressing the escape key or by reaching the end of the stories.
- Animations and Effects: The app animates the progress bars and handles the transition between stories smoothly.
Overall, the application is designed to provide an engaging way to view a user's stories with intuitive navigation and visual feedback.
89.Billion-Parameter Theories(Billion-Parameter Theories)
Throughout history, humans labeled the unexplained as mystical, but as we developed science, we began to understand the universe in simple terms, summarizing complex phenomena with concise equations. This approach worked well for complicated systems, which have many parts that can be broken down and analyzed individually, like a jet engine or a laptop circuit.
However, many issues we face today, such as poverty and climate change, are complex. These complex systems involve dynamic interactions and feedback loops that can't be understood by merely examining individual parts. Traditional scientific methods have struggled to provide precise predictions or interventions in these areas.
The Santa Fe Institute, founded to tackle these complex problems, identified key characteristics of complex systems but faced challenges in applying their insights practically. While they could describe how these systems behave, they lacked the tools to intervene effectively.
Historically, practical skills often developed before theoretical frameworks, as seen in blacksmithing or architecture. Today, modern AI tools allow us to build models of complex systems that work effectively, even if we don't fully understand why. These models, like large language models, can compress vast amounts of information, offering useful predictions despite being complex themselves.
Critics argue that these large models lack the compactness of traditional theories. However, the architecture behind these models might still be simple and universal, while the specific trained models remain vast and complex.
The emerging field of mechanistic interpretability aims to understand how these models operate, potentially allowing us to derive more accurate insights about complex systems. This approach represents a shift in how we can study complexity, moving away from classical theories to a more experimental method of extracting knowledge from models.
In conclusion, many critical challenges facing humanity may not be unsolvable; they might just require new ways of understanding complex systems. While building rich models is challenging, the new tools we have may enable us to simulate and understand these systems in ways we couldn't before. The quest for concise theories may not apply to everything, especially in these complex domains.
90.HyperCard discovery: Neuromancer, Count Zero, Mona Lisa Overdrive (2022)(HyperCard discovery: Neuromancer, Count Zero, Mona Lisa Overdrive (2022))
The text appears to be a link to a web archive of a page related to the games based on William Gibson's novels "Neuromancer," "Count Zero," and "Mona Lisa Overdrive." These games are likely inspired by the cyberpunk themes found in Gibson's writing.
91.Defeat as Method(Defeat as Method)
No summary available.
92.Scientists revive activity in frozen mouse brains for the first time(Scientists revive activity in frozen mouse brains for the first time)
No summary available.
93.Online age-verification tools for child safety are surveilling adults(Online age-verification tools for child safety are surveilling adults)
No summary available.
94.Intel Demos Chip to Compute with Encrypted Data(Intel Demos Chip to Compute with Encrypted Data)
No summary available.
95.Practical Guide to Bare Metal C++(Practical Guide to Bare Metal C++)
No summary available.
96.The Gervais Principle, or the Office According to “The Office” (2009)(The Gervais Principle, or the Office According to “The Office” (2009))
The article discusses "The Gervais Principle," a management theory derived from the TV show "The Office." The author, Venkatesh Rao, argues that the show reveals insights into organizational dynamics rather than just providing comedic entertainment.
Key Points:
- The Gervais Principle: Sociopaths in organizations promote high-performing 'losers' into middle management, groom under-performing 'losers' into sociopaths, and leave those who only meet minimal expectations to fend for themselves.
- Organizational Layers: The article describes three layers within organizations:
  - Sociopaths: Ambitious individuals who thrive on power and control.
  - Clueless: Middle managers who lack awareness of their situation and remain loyal to the company despite its lack of loyalty to them.
  - Losers: Employees who have made poor economic choices, trading long-term potential for short-term stability.
- Organizational Dynamics: The principle highlights how sociopaths manipulate promotions to maintain a balance of power and efficiency within organizations, often leading to chaos unless buffered by a layer of clueless middle management.
- Character Examples from "The Office": The author uses characters like Michael Scott and Ryan to illustrate the principles. For instance, Michael's promotion to management despite incompetence shows how sociopaths exploit the clueless for their own benefit.
- Life Cycle of Organizations: Organizations grow, become bureaucratic, and eventually collapse when they no longer adapt or innovate, often leading to a restructuring that favors sociopaths.
In summary, "The Gervais Principle" provides a framework for understanding workplace dynamics in "The Office," illustrating how different types of employees interact within organizational structures and how sociopaths strategically manage the workforce for their own gain.
97.I'm going to build my own OpenClaw, with blackjack and bun(I'm going to build my own OpenClaw, with blackjack and bun)
PiClaw Overview
PiClaw is a tool that runs a coding agent in a secure, isolated environment using Docker. It features a web-based interface and offers various functionalities for coding and project management.
Key Features:
- Streaming Web UI: Offers real-time updates and supports rendering for Markdown, KaTeX, and Mermaid.
- Workspace Explorer: Displays a file tree with previews, allowing easy navigation and file uploads.
- Disk Usage Visualization: Shows folder sizes graphically with detailed hover information.
- Code Editor: A built-in editor with syntax highlighting for 12 programming languages, search/replace functions, and auto-saving.
- Persistent Storage: Stores messages, media, and tasks using SQLite, ensuring data is retained.
- Skills and Authentication: Supports various skills like debugging and web search, with optional WebAuthn passkeys for security.
- WhatsApp Integration: An optional feature for additional communication.
Quick Start Instructions:
- Build the Docker image with make build
- Start the container with make up
- Access the web interface at http://localhost:8080
Workspace Features:
- A sidebar shows files and allows drag-and-drop uploads.
- The code editor has multiple language support and features for ease of use.
Configuration: Users can set environment variables to customize the setup, including web port and authentication methods.
Development and Deployment:
- Commands are available for building, testing, and deploying the application.
- Works with Docker and other container runtimes.
Documentation: Provides detailed guidance on configuration, architecture, and features.
License: MIT License.
98.RFC 454545 – Human Em Dash Standard(RFC 454545 – Human Em Dash Standard)
The text discusses a proposed standard called the Human Em Dash (HED), which is designed to help distinguish punctuation used by human writers from that generated by automated systems.
Key points include:
- Introduction of HED: The HED is a new Unicode character that looks like the traditional em dash but is encoded separately to indicate it was created by a human.
- Problem: With the rise of automated text generation, em dash usage has increased, making it unclear which text is human-written and causing anxiety among human writers whose punctuation might be mistaken for machine-generated content.
- Human Attestation Mark (HAM): To indicate that a dash is human-made, it must be preceded by a HAM, which should have minimal visual impact.
- Behavioral Verification: Systems that use the HED should ensure there is evidence of human authorship, such as pauses or cursor movements.
- Security Concerns: There is a risk of automated systems mimicking human hesitation, so implementations should monitor for suspicious behavior.
- Policy Considerations: There may be regulations regarding the use of HED by automated systems to prevent punctuation impersonation.
Overall, the HED aims to preserve the integrity of human writing in an era of increasing automated text generation.
99.Optimizing Top K in Postgres(Optimizing Top K in Postgres)
Summary of "How We Optimized Top K in Postgres"
The article discusses the challenges of retrieving the "Top K" rows from a PostgreSQL database, which means fetching the best K entries based on certain criteria, such as recent timestamps or highest scores. While creating an index can speed up these queries, complications arise when additional filters are added.
Key Points:
- Top K Queries: These queries aim to fetch the top K rows based on specific ordering criteria (e.g., most recent entries).
- B-Tree Indexes: PostgreSQL uses B-Tree indexes for efficient retrieval of ordered data. For example, a query that takes 15 seconds without an index can drop to just 5 milliseconds with one.
- Challenges with Filtering: When filters are added (e.g., filtering by severity), performance can degrade significantly: PostgreSQL must either scan the entire index or sort after filtering, which can push execution times back up to 15 seconds.
- Composite Indexes: Composite B-Tree indexes can help with specific queries, but they don't generalize well to different query shapes, leading to many required indexes and increased complexity.
- Text Search Limitations: Full-text search complicates things further, as it doesn't fit neatly into the B-Tree model due to the nature of text filtering and scoring. Queries combining text search and filters can still take a long time (e.g., 37 seconds).
- Alternative Solutions: Other databases, like ParadeDB, take a different approach, employing a compound index that supports multiple filtering and sorting criteria without needing numerous tailored indexes, which improves performance significantly.
- Inverted Index and Columnar Arrays: ParadeDB's structure pairs an inverted index for quick term lookups with columnar storage for efficient access to data fields, minimizing costly row lookups.
- Optimizations: Techniques like Block WAND allow early pruning of irrelevant data. A complex text-search query in ParadeDB can execute in about 300 milliseconds, a significant improvement over PostgreSQL.
- Future Improvements: The article concludes with plans for further Top K enhancements, including partitioning data and optimizing joins across multiple tables.
Overall, while PostgreSQL can handle Top K queries effectively under certain conditions, more complex queries with filtering and text search require alternative methods, which ParadeDB manages more efficiently.
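As a language-agnostic illustration of what a Top-K query computes (not of how Postgres or ParadeDB implements it), here is a bounded-heap sketch in Python; the field names and filter are made up:

```python
import heapq
import random

# Top-K sketch: keep only the K best rows seen so far, so memory stays
# O(K) regardless of how many rows stream past -- the same work a
# B-Tree index scan with LIMIT avoids doing at query time.
random.seed(42)
rows = [{"id": i,
         "score": random.random(),
         "severity": random.choice(["info", "error"])}
        for i in range(100_000)]

# Unfiltered Top-5 by score (returned in descending order).
top5 = heapq.nlargest(5, rows, key=lambda r: r["score"])

# With an extra filter, the bounded scan applies to the filtered stream --
# the case where a single-column index stops being enough for Postgres.
top5_errors = heapq.nlargest(
    5, (r for r in rows if r["severity"] == "error"),
    key=lambda r: r["score"])

print(len(top5), len(top5_errors))
```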
100.How I topped the HuggingFace open LLM leaderboard on two gaming GPUs(How I topped the HuggingFace open LLM leaderboard on two gaming GPUs)
I discovered that duplicating a specific block of 7 middle layers in the Qwen2-72B model, without changing any weights, improved its performance, making it the top model on the Open LLM Leaderboard as of 2026. The interesting part is that duplicating a single layer or too many layers did not help, indicating that only this specific size of 7 layers functions effectively. This suggests that during pretraining, certain functional circuits are created in the model's layers that need to be kept intact.
I developed this on two RTX 4090 graphics cards in my basement, and now I'm using newer models on a dual GH200 setup. I will share the code and new models soon. Feel free to ask any questions!
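The duplication trick itself (sometimes called a "self-merge") is just index bookkeeping over the layer list. A toy sketch with stand-in layers — Qwen2-72B has 80 transformer blocks, but the block's start position below is hypothetical; only the width of 7 comes from the post:

```python
# Self-merge sketch: repeat a contiguous block of layers, reusing the
# exact same weights with no retraining. The start index is hypothetical;
# the post only says the duplicated block is 7 middle layers.
n_layers, start, width = 80, 36, 7
layers = [f"block_{i}" for i in range(n_layers)]   # stand-ins for real blocks

merged = (layers[: start + width]          # everything up to end of block
          + layers[start : start + width]  # the block again, same weights
          + layers[start + width :])       # the rest, unchanged

print(len(merged))   # 87 blocks after the merge
```

The post's observation is that only this particular block width helped, suggesting the repeated span forms a self-contained circuit that can safely run twice.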