{"schema": "intel.signal.v1", "ts": "2026-01-16T03:18:26Z", "source": "rss", "feed": "https://export.arxiv.org/rss/cs.AI", "title": "A New Strategy for Verifying Reach-Avoid Specifications in Neural Feedback Systems", "link": "https://arxiv.org/abs/2601.08065", "summary": "arXiv:2601.08065v1 Announce Type: new Abstract: Forward reachability analysis is the predominant approach for verifying reach-avoid properties in neural feedback systems (dynamical systems controlled by neural networks). This dominance stems from the limited scalability of existing backward reachability methods. In this work, we introduce new algorithms that compute both over- and under-approximations of backward reachable sets for such systems. We further integrate these backward algorithms with established forward analysis techniques to yield a unified verification framework for neural feedb", "tags": [], "hash": "6b82adf74791abcc"}
{"schema": "intel.signal.v1", "ts": "2026-01-16T03:18:26Z", "source": "rss", "feed": "https://export.arxiv.org/rss/cs.AI", "title": "Resisting Manipulative Bots in Memecoin Copy Trading: A Multi-Agent Approach with Chain-of-Thought Reasoning", "link": "https://arxiv.org/abs/2601.08641", "summary": "arXiv:2601.08641v1 Announce Type: new Abstract: The launch of \\$Trump coin ignited a wave in meme coin investment. Copy trading, as a strategy-agnostic approach that eliminates the need for deep trading knowledge, quickly gains widespread popularity in the meme coin market. However, copy trading is not a guarantee of profitability due to the prevalence of manipulative bots, the uncertainty of the followed wallets' future performance, and the lag in trade execution. Recently, large language models (LLMs) have shown promise in financial applications by effectively understanding multi-modal data ", "tags": [], "hash": "b7e192c0680fc8a6"}
{"schema": "intel.signal.v1", "ts": "2026-01-16T03:18:26Z", "source": "rss", "feed": "https://export.arxiv.org/rss/cs.AI", "title": "Hierarchical Sparse Plus Low Rank Compression of LLM", "link": "https://arxiv.org/abs/2601.07839", "summary": "arXiv:2601.07839v1 Announce Type: cross Abstract: Modern large language models (LLMs) place extraordinary pressure on memory and compute budgets, making principled compression indispensable for both deployment and continued training. We present Hierarchical Sparse Plus Low-Rank (HSS) compression, a two-stage scheme that (i) removes the largest-magnitude weights into a sparse matrix S and (ii) applies a recursive Hierarchically Sparse Separable (HSS) low-rank factorisation to the dense residual matrix. A recursive rank-reducing strategy and a reverse Cuthill-Mckee (RCM) permutation are introduc", "tags": [], "hash": "de3044ac0bb2d06e"}
{"schema": "intel.signal.v1", "ts": "2026-01-16T05:03:28Z", "source": "rss", "feed": "https://export.arxiv.org/rss/cs.AI", "title": "Continuum Memory Architectures for Long-Horizon LLM Agents", "link": "https://arxiv.org/abs/2601.09913", "summary": "arXiv:2601.09913v1 Announce Type: new Abstract: Retrieval-augmented generation (RAG) has become the default strategy for providing large language model (LLM) agents with contextual knowledge. Yet RAG treats memory as a stateless lookup table: information persists indefinitely, retrieval is read-only, and temporal continuity is absent. We define the \\textit{Continuum Memory Architecture} (CMA), a class of systems that maintain and update internal state across interactions through persistent storage, selective retention, associative routing, temporal chaining, and consolidation into higher-order", "tags": [], "hash": "5acdf2373bec503d"}
{"schema": "intel.signal.v1", "ts": "2026-01-16T14:18:33Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Show HN: I built a text-based business simulator to replace video courses", "link": "https://www.core-mba.pro/", "summary": "<p>I am a solo developer, and I built Core MBA because I was frustrated with the \"video course\" default in business education.<p>I wanted to build a \"compiler for business logic\"\u2014a tool where I could read a concept in 5 minutes and immediately test it in a hostile environment to see if my strategy actually compiles or throws a runtime error.<p>The project is a business simulator built on React 19 and TypeScript.<p>The core technical innovation isn't just using AI; it's the architecture of a closed loop between a deterministic economic engine and a generative AI validation layer.<p>The biggest ", "tags": [], "hash": "da1d8dbca2875c24"}
{"schema": "intel.signal.v1", "ts": "2026-01-18T17:37:14Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Predicting OpenAI's ad strategy", "link": "https://ossa-ma.github.io/blog/openads", "summary": "<p>Article URL: <a href=\"https://ossa-ma.github.io/blog/openads\">https://ossa-ma.github.io/blog/openads</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46668021\">https://news.ycombinator.com/item?id=46668021</a></p> <p>Points: 344</p> <p># Comments: 227</p>", "tags": [], "hash": "cc9ea9a7cf1392fa"}
{"schema": "intel.signal.v1", "ts": "2026-01-18T18:07:13Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Predicting OpenAI's ad strategy", "link": "https://ossa-ma.github.io/blog/openads", "summary": "<p>Article URL: <a href=\"https://ossa-ma.github.io/blog/openads\">https://ossa-ma.github.io/blog/openads</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46668021\">https://news.ycombinator.com/item?id=46668021</a></p> <p>Points: 364</p> <p># Comments: 255</p>", "tags": [], "hash": "47cc2c86692ae294"}
{"schema": "intel.signal.v1", "ts": "2026-01-18T18:22:13Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Predicting OpenAI's ad strategy", "link": "https://ossa-ma.github.io/blog/openads", "summary": "<p>Article URL: <a href=\"https://ossa-ma.github.io/blog/openads\">https://ossa-ma.github.io/blog/openads</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46668021\">https://news.ycombinator.com/item?id=46668021</a></p> <p>Points: 367</p> <p># Comments: 265</p>", "tags": [], "hash": "ba7ecab8815fa723"}
{"schema": "intel.signal.v1", "ts": "2026-01-18T18:52:13Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Predicting OpenAI's ad strategy", "link": "https://ossa-ma.github.io/blog/openads", "summary": "<p>Article URL: <a href=\"https://ossa-ma.github.io/blog/openads\">https://ossa-ma.github.io/blog/openads</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46668021\">https://news.ycombinator.com/item?id=46668021</a></p> <p>Points: 376</p> <p># Comments: 279</p>", "tags": [], "hash": "4403f2604139e1cf"}
{"schema": "intel.signal.v1", "ts": "2026-01-18T19:37:13Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Predicting OpenAI's ad strategy", "link": "https://ossa-ma.github.io/blog/openads", "summary": "<p>Article URL: <a href=\"https://ossa-ma.github.io/blog/openads\">https://ossa-ma.github.io/blog/openads</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46668021\">https://news.ycombinator.com/item?id=46668021</a></p> <p>Points: 386</p> <p># Comments: 294</p>", "tags": [], "hash": "c5605d6a10a9e448"}
{"schema": "intel.signal.v1", "ts": "2026-01-18T20:07:14Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Predicting OpenAI's ad strategy", "link": "https://ossa-ma.github.io/blog/openads", "summary": "<p>Article URL: <a href=\"https://ossa-ma.github.io/blog/openads\">https://ossa-ma.github.io/blog/openads</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46668021\">https://news.ycombinator.com/item?id=46668021</a></p> <p>Points: 393</p> <p># Comments: 305</p>", "tags": [], "hash": "7fd5965cb2c63a34"}
{"schema": "intel.signal.v1", "ts": "2026-01-18T20:52:14Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Predicting OpenAI's ad strategy", "link": "https://ossa-ma.github.io/blog/openads", "summary": "<p>Article URL: <a href=\"https://ossa-ma.github.io/blog/openads\">https://ossa-ma.github.io/blog/openads</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46668021\">https://news.ycombinator.com/item?id=46668021</a></p> <p>Points: 406</p> <p># Comments: 323</p>", "tags": [], "hash": "1cec881d62e81c7f"}
{"schema": "intel.signal.v1", "ts": "2026-01-18T21:07:14Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Predicting OpenAI's ad strategy", "link": "https://ossa-ma.github.io/blog/openads", "summary": "<p>Article URL: <a href=\"https://ossa-ma.github.io/blog/openads\">https://ossa-ma.github.io/blog/openads</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46668021\">https://news.ycombinator.com/item?id=46668021</a></p> <p>Points: 414</p> <p># Comments: 324</p>", "tags": [], "hash": "77cbbda9fabbb7fe"}
{"schema": "intel.signal.v1", "ts": "2026-01-18T21:22:17Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Predicting OpenAI's ad strategy", "link": "https://ossa-ma.github.io/blog/openads", "summary": "<p>Article URL: <a href=\"https://ossa-ma.github.io/blog/openads\">https://ossa-ma.github.io/blog/openads</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46668021\">https://news.ycombinator.com/item?id=46668021</a></p> <p>Points: 419</p> <p># Comments: 330</p>", "tags": [], "hash": "00d7aad51649c505"}
{"schema": "intel.signal.v1", "ts": "2026-01-18T22:37:15Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Predicting OpenAI's ad strategy", "link": "https://ossa-ma.github.io/blog/openads", "summary": "<p>Article URL: <a href=\"https://ossa-ma.github.io/blog/openads\">https://ossa-ma.github.io/blog/openads</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46668021\">https://news.ycombinator.com/item?id=46668021</a></p> <p>Points: 439</p> <p># Comments: 364</p>", "tags": [], "hash": "797cef5e3a775eb2"}
{"schema": "intel.signal.v1", "ts": "2026-01-18T22:52:15Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Predicting OpenAI's ad strategy", "link": "https://ossa-ma.github.io/blog/openads", "summary": "<p>Article URL: <a href=\"https://ossa-ma.github.io/blog/openads\">https://ossa-ma.github.io/blog/openads</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46668021\">https://news.ycombinator.com/item?id=46668021</a></p> <p>Points: 441</p> <p># Comments: 367</p>", "tags": [], "hash": "c2db7b4edbb5a3a2"}
{"schema": "intel.signal.v1", "ts": "2026-01-18T23:07:15Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Predicting OpenAI's ad strategy", "link": "https://ossa-ma.github.io/blog/openads", "summary": "<p>Article URL: <a href=\"https://ossa-ma.github.io/blog/openads\">https://ossa-ma.github.io/blog/openads</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46668021\">https://news.ycombinator.com/item?id=46668021</a></p> <p>Points: 444</p> <p># Comments: 371</p>", "tags": [], "hash": "cbbb653a640af5a7"}
{"schema": "intel.signal.v1", "ts": "2026-01-19T01:07:16Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Predicting OpenAI's ad strategy", "link": "https://ossa-ma.github.io/blog/openads", "summary": "<p>Article URL: <a href=\"https://ossa-ma.github.io/blog/openads\">https://ossa-ma.github.io/blog/openads</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46668021\">https://news.ycombinator.com/item?id=46668021</a></p> <p>Points: 472</p> <p># Comments: 402</p>", "tags": [], "hash": "0d638f400c7ecfee"}
{"schema": "intel.signal.v1", "ts": "2026-01-19T01:37:16Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Predicting OpenAI's ad strategy", "link": "https://ossa-ma.github.io/blog/openads", "summary": "<p>Article URL: <a href=\"https://ossa-ma.github.io/blog/openads\">https://ossa-ma.github.io/blog/openads</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46668021\">https://news.ycombinator.com/item?id=46668021</a></p> <p>Points: 476</p> <p># Comments: 406</p>", "tags": [], "hash": "9c327184909ee7d3"}
{"schema": "intel.signal.v1", "ts": "2026-01-19T03:22:17Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Predicting OpenAI's ad strategy", "link": "https://ossa-ma.github.io/blog/openads", "summary": "<p>Article URL: <a href=\"https://ossa-ma.github.io/blog/openads\">https://ossa-ma.github.io/blog/openads</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46668021\">https://news.ycombinator.com/item?id=46668021</a></p> <p>Points: 496</p> <p># Comments: 428</p>", "tags": [], "hash": "74f5828133abcf40"}
{"schema": "intel.signal.v1", "ts": "2026-01-19T03:37:17Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Predicting OpenAI's ad strategy", "link": "https://ossa-ma.github.io/blog/openads", "summary": "<p>Article URL: <a href=\"https://ossa-ma.github.io/blog/openads\">https://ossa-ma.github.io/blog/openads</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46668021\">https://news.ycombinator.com/item?id=46668021</a></p> <p>Points: 498</p> <p># Comments: 428</p>", "tags": [], "hash": "8b52d2775d409999"}
{"schema": "intel.signal.v1", "ts": "2026-01-19T04:07:17Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Predicting OpenAI's ad strategy", "link": "https://ossa-ma.github.io/blog/openads", "summary": "<p>Article URL: <a href=\"https://ossa-ma.github.io/blog/openads\">https://ossa-ma.github.io/blog/openads</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46668021\">https://news.ycombinator.com/item?id=46668021</a></p> <p>Points: 505</p> <p># Comments: 436</p>", "tags": [], "hash": "53d5194602c3afd1"}
{"schema": "intel.signal.v1", "ts": "2026-01-19T05:07:18Z", "source": "rss", "feed": "https://export.arxiv.org/rss/cs.AI", "title": "BoxMind: Closed-loop AI strategy optimization for elite boxing validated in the 2024 Olympics", "link": "https://arxiv.org/abs/2601.11492", "summary": "arXiv:2601.11492v1 Announce Type: new Abstract: Competitive sports require sophisticated tactical analysis, yet combat disciplines like boxing remain underdeveloped in AI-driven analytics due to the complexity of action dynamics and the lack of structured tactical representations. To address this, we present BoxMind, a closed-loop AI expert system validated in elite boxing competition. By defining atomic punch events with precise temporal boundaries and spatial and technical attributes, we parse match footage into 18 hierarchical technical-tactical indicators. We then propose a graph-based pre", "tags": ["docs", "templates"], "hash": "217d7a3da07b91fd"}
{"schema": "intel.signal.v1", "ts": "2026-01-19T06:07:18Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Predicting OpenAI's ad strategy", "link": "https://ossa-ma.github.io/blog/openads", "summary": "<p>Article URL: <a href=\"https://ossa-ma.github.io/blog/openads\">https://ossa-ma.github.io/blog/openads</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46668021\">https://news.ycombinator.com/item?id=46668021</a></p> <p>Points: 519</p> <p># Comments: 451</p>", "tags": [], "hash": "4850b937fcb3a685"}
{"schema": "intel.signal.v1", "ts": "2026-01-19T06:52:18Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Predicting OpenAI's ad strategy", "link": "https://ossa-ma.github.io/blog/openads", "summary": "<p>Article URL: <a href=\"https://ossa-ma.github.io/blog/openads\">https://ossa-ma.github.io/blog/openads</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46668021\">https://news.ycombinator.com/item?id=46668021</a></p> <p>Points: 523</p> <p># Comments: 457</p>", "tags": [], "hash": "f095af28c14ecf80"}
{"schema": "intel.signal.v1", "ts": "2026-01-21T05:07:39Z", "source": "rss", "feed": "https://export.arxiv.org/rss/cs.CR", "title": "De-Anonymization at Scale via Tournament-Style Attribution", "link": "https://arxiv.org/abs/2601.12407", "summary": "arXiv:2601.12407v1 Announce Type: new Abstract: As LLMs rapidly advance and enter real-world use, their privacy implications are increasingly important. We study an authorship de-anonymization threat: using LLMs to link anonymous documents to their authors, potentially compromising settings such as double-blind peer review. We propose De-Anonymization at Scale (DAS), a large language model-based method for attributing authorship among tens of thousands of candidate texts. DAS uses a sequential progression strategy: it randomly partitions the candidate corpus into fixed-size groups, prompts an ", "tags": ["docs", "templates"], "hash": "25b3869a450f64ec"}
{"schema": "intel.signal.v1", "ts": "2026-01-21T05:07:39Z", "source": "rss", "feed": "https://export.arxiv.org/rss/cs.CR", "title": "SecureSplit: Mitigating Backdoor Attacks in Split Learning", "link": "https://arxiv.org/abs/2601.14054", "summary": "arXiv:2601.14054v1 Announce Type: new Abstract: Split Learning (SL) offers a framework for collaborative model training that respects data privacy by allowing participants to share the same dataset while maintaining distinct feature sets. However, SL is susceptible to backdoor attacks, in which malicious clients subtly alter their embeddings to insert hidden triggers that compromise the final trained model. To address this vulnerability, we introduce SecureSplit, a defense mechanism tailored to SL. SecureSplit applies a dimensionality transformation strategy to accentuate subtle differences be", "tags": ["docs", "templates"], "hash": "1324a6906bfe72a0"}
{"schema": "intel.signal.v1", "ts": "2026-01-22T05:07:47Z", "source": "rss", "feed": "https://export.arxiv.org/rss/cs.AI", "title": "Scalable Knee-Point Guided Activity Group Selection in Multi-Tree Genetic Programming for Dynamic Multi-Mode Project Scheduling", "link": "https://arxiv.org/abs/2601.14485", "summary": "arXiv:2601.14485v1 Announce Type: new Abstract: The dynamic multi-mode resource-constrained project scheduling problem is a challenging scheduling problem that requires making decisions on both the execution order of activities and their corresponding execution modes. Genetic programming has been widely applied as a hyper-heuristic to evolve priority rules that guide the selection of activity-mode pairs from the current eligible set. Recently, an activity group selection strategy has been proposed to select a subset of activities rather than a single activity at each decision point, allowing f", "tags": ["docs", "templates"], "hash": "2480cb7069a28f88"}
{"schema": "intel.signal.v1", "ts": "2026-01-22T20:23:04Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Taming P99s in OpenFGA: How we built a self-tuning strategy planner", "link": "https://auth0.com/blog/self-tuning-strategy-planner-openfga/", "summary": "<p>Article URL: <a href=\"https://auth0.com/blog/self-tuning-strategy-planner-openfga/\">https://auth0.com/blog/self-tuning-strategy-planner-openfga/</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46724542\">https://news.ycombinator.com/item?id=46724542</a></p> <p>Points: 3</p> <p># Comments: 0</p>", "tags": [], "hash": "85d0136536d7785c"}
{"schema": "intel.signal.v1", "ts": "2026-01-22T20:53:04Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Taming P99s in OpenFGA: How we built a self-tuning strategy planner", "link": "https://auth0.com/blog/self-tuning-strategy-planner-openfga/", "summary": "<p>Article URL: <a href=\"https://auth0.com/blog/self-tuning-strategy-planner-openfga/\">https://auth0.com/blog/self-tuning-strategy-planner-openfga/</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46724542\">https://news.ycombinator.com/item?id=46724542</a></p> <p>Points: 4</p> <p># Comments: 1</p>", "tags": [], "hash": "bb6f993ea6942092"}
{"schema": "intel.signal.v1", "ts": "2026-01-22T21:23:04Z", "source": "rss", "feed": "https://hnrss.org/frontpage", "title": "Taming P99s in OpenFGA: How we built a self-tuning strategy planner", "link": "https://auth0.com/blog/self-tuning-strategy-planner-openfga/", "summary": "<p>Article URL: <a href=\"https://auth0.com/blog/self-tuning-strategy-planner-openfga/\">https://auth0.com/blog/self-tuning-strategy-planner-openfga/</a></p> <p>Comments URL: <a href=\"https://news.ycombinator.com/item?id=46724542\">https://news.ycombinator.com/item?id=46724542</a></p> <p>Points: 4</p> <p># Comments: 1</p>", "tags": [], "hash": "bb6f993ea6942092"}
