Category: Technology

  • AAEON’s UP Squared Series Gains Full Mainline Linux Support for 40-pin GPIO Header

    Driver redesign led by Bootlin sees GPIO forwarder library and pinctrl driver merged into Linux 6.18 release. 

    (Eindhoven, The Netherlands – Feb 23) AAEON’s UP brand, a leading provider of professional developer boards, is excited to announce that full Linux kernel support for its UP Squared series’ 40-pin I/O header has been officially merged into the Linux 6.18 release.

    Following the brand’s 10-year anniversary last May, UP outlined its intention to complete the upstreaming of its DKMS drivers to the Linux mainline kernel. This objective was part of a broader set of initiatives aimed at providing users with a more streamlined route from concept to project deployment.

    Upstream support is a goal that AAEON had been working towards for a number of years. However, coordinating the FPGA and Intel® SoCs on UP hardware has made mainline Linux support for the 40-pin header a challenge.

    To resolve this issue and assist in pushing the project to completion, AAEON approached Bootlin, a leading embedded Linux and open-source development company. Bootlin’s embedded Linux development expertise was instrumental in resolving the pain points encountered during previous attempts to upstream support for its 40-pin header. By rewriting the pinctrl driver to remove Intel-specific code, adding a GPIO forwarder library, and extending the gpio-aggregator driver to create a reusable library, full upstream support was achieved. As a result, the UP Squared series’ 40-pin header now supports GPIO, I²C, UART, and SPI out-of-the-box on mainline Linux 6.18.
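    For users who want to exercise the new support, the standard libgpiod command-line tools and the kernel's gpio-aggregator interface are the usual entry points. The sketch below is illustrative only: chip names and line offsets are placeholders (actual labels on an UP Squared board will differ), and the `gpioset` syntax shown is libgpiod v1 (v2 uses different flags).

```shell
# List the GPIO chips the mainline kernel now exposes (labels are board-specific)
gpiodetect
gpioinfo gpiochip0

# Drive one header line high for 5 s (libgpiod v1 syntax; offset 17 is a placeholder)
gpioset --mode=time --sec=5 gpiochip0 17=1

# gpio-aggregator: collect scattered lines into one virtual chip (sysfs interface
# documented in Documentation/admin-guide/gpio/gpio-aggregator.rst)
echo 'gpiochip0 0,2-3' > /sys/bus/platform/drivers/gpio-aggregator/new_device
```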

    “This achievement is the result of a multi-year effort and close collaboration with Bootlin, and one that will provide a huge benefit to the entire UP community,” said Victor Lai, Managing Director of UP and AAEON Europe. “With upstream integration for our UP Squared series now established, we are already working hard to expand this support across our product family and help even more users transform their ideas into real-world successes.”

  • India’s Power Transition Creates Clear Utility Divide

    ARE report finds JSW Energy and Tata Power best positioned for firm-power era; NTPC’s execution critical as coal economics tighten 

    SINGAPORE / NEW DELHI, INDIA, Feb 23 - India’s power sector is entering a decisive new phase as electricity demand surges, peak loads hit record highs, and the country moves toward its 500GW non-fossil capacity target by 2030, following a record 52GW of capacity added in FY26. But the next chapter of the transition will not be defined by installed capacity alone. 

    A new report by Asia Research & Engagement (ARE), Powering Net Zero: Pathways to Clean Energy for India’s Utility Companies, finds that the market is shifting toward firm, dispatchable and availability-linked power — creating clear divergence among India’s largest listed utilities. 

    The analysis finds: 

    • JSW Energy and Tata Power are best placed to monetise the transition, combining contracted renewable growth, storage depth and improving cashflow quality. 
    • Adani Green Energy remains the fastest capacity scaler with strong long-term visibility, though storage integration remains at an early stage. 
    • NTPC, India’s largest generator, retains unmatched scale and sovereign-backed financing, but its transition outcomes hinge on execution speed and managing coal’s declining role. 
    • Adani Power remains predominantly thermal, with limited exposure to the structural upside from renewables and storage. 

    The report also highlights tightening coal economics. While new ultra-supercritical coal plants clear bids at INR 5.5–6 per kWh, effective delivered costs rise materially once utilisation, fuel volatility and compliance costs are factored in. By comparison, round-the-clock and storage-backed renewable projects are clearing at INR 2.7–5.1 per kWh with availability guarantees embedded in contracts. 
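    The utilisation effect is easy to see with a toy calculation (all numbers below are illustrative assumptions, not figures from the ARE report): if part of a coal tariff is fixed-cost recovery priced at a high assumed plant load factor (PLF), running the plant at a lower PLF spreads those fixed costs over fewer units and pushes the effective cost above the headline bid.

```python
# Hedged illustration: why delivered coal cost can exceed the cleared bid
# once utilisation drops. All inputs are assumptions for the sketch.
def effective_tariff(bid_inr_per_kwh, fixed_share, assumed_plf, actual_plf):
    """Split the bid into fixed and variable parts; the fixed part is
    recovered over fewer units when the plant runs below its assumed PLF."""
    fixed = bid_inr_per_kwh * fixed_share
    variable = bid_inr_per_kwh * (1 - fixed_share)
    return variable + fixed * (assumed_plf / actual_plf)

# A 5.5 INR/kWh bid with an assumed 60% fixed-cost share, priced at 85% PLF
# but actually run at 55% PLF:
cost = effective_tariff(5.5, 0.60, 0.85, 0.55)
print(round(cost, 2))  # noticeably higher than the 5.5 INR/kWh headline bid
```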

    “The debate is no longer coal versus renewables,” said Arun Kumar, Strategic Advisor for Power Markets & Technology Innovation at ARE and lead author of the report. “As procurement shifts toward round-the-clock supply, reliability and execution — not just megawatts — will determine competitive advantage.” 

     He added: “While this ARE study highlights significant momentum across the sector, it also identifies areas where sharper strategic clarity, improved contracting frameworks, and stronger delivery capabilities will be essential to meeting India’s long-term decarbonisation goals.” 

  • Thermography helps leading bike fitter find optimal cyclist position

    One of the most advanced bike-fitting studios worldwide is tapping into the benefits of Flir thermal imaging technology to push the boundaries of sports science and biomechanics at all levels of cycling.

     

    Located in Antwerp, Belgium, Bikefit Van Staeyen uses Flir-generated infrared images to visualize body heat and pressure distribution in real time, subsequently optimizing rider position and bike setup.

    Bikefit Van Staeyen offers professional bike fitting based on more than 20 years of experience in cycling. Founded by brothers Kevin and Michael Van Staeyen (a former professional road racing cyclist), the business has built its success on extensive expertise in sports science, biomechanics and cycling. What started as a passion for precision and performance evolved into one of the world’s most advanced bike-fitting studios.

    The principal differentiator of Bikefit Van Staeyen is its dual-expert approach: every bike fit is performed by both brothers working together, merging technical analysis and professional cycling experience with medical understanding.

    “This synergy allows us to identify patterns and dysfunctions far beyond what conventional systems can capture,” explains Kevin.

    Real-time insight

    Central to the process is the use of advanced thermal imaging technology from Flir, which provides a real-time view into physiological asymmetries, pressure distribution, and underlying muscular imbalances. 

     

    “We use a Flir infrared camera to study a heat map of a rider pedaling to optimize body position and bike setup,” says Kevin. “By combining thermography with motion tracking, force analysis, and EMG [electromyographic] data, we can see what others can only guess: how the rider’s body reacts, compensates, and adapts under load. We’ve named our thermography application ‘Lava.flow’, a process that allows us to understand and optimize injury-prone areas, muscle activation, and pressure points in a completely new way.”

    Bikefit Van Staeyen initially used a Flir E76 thermal imaging camera but has since migrated to the newer E96. The E96 is Flir’s first pistol-grip camera with 640 × 480 thermal resolution, allowing users to survey targets safely and quickly. This advanced sensor offers complete coverage of near and distant targets through a range of lens options. In addition, Flir Ignite provides the automatic uploading of E96 images directly from the camera to the cloud for easy, secure storage and sharing.

    As pioneers in thermal analysis for cycling applications, Bikefit Van Staeyen works in close collaboration with Thermal Focus, a Flir Platinum Partner and stockist of the largest selection of Flir infrared cameras in the Benelux (Belgium, Netherlands, Luxembourg) region.

    Temperature in focus

    The hot spots and cold spots identified by Flir thermal cameras serve as direct indicators of how a cyclist’s body functions on the bike. An excessive temperature increase in certain areas can indicate overexertion, friction, or poor posture. 

     

    Using the Flir E96, Bikefit Van Staeyen can:

    • detect hot spots and elevated pressure zones on the saddle, shoes, or handlebars;
    • identify asymmetric muscle loading and unbalanced activation patterns;
    • analyze vascular restrictions that may lead to numbness or reduced performance; and
    • detect thermal irregularities that could indicate overload.

    With this in-depth thermal analysis, the brothers are able to identify a range of issues that prompt adjustments for the optimal riding experience. For instance, asymmetric heat distribution around the kneecap points to a possible biomechanical problem, while too much heat in the ball of the foot typically means incorrect cleat positioning. Similarly, increased temperature in the lower back could be the result of a compensatory mechanism or incorrect saddle adjustment.
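    As a toy sketch of how such a left/right comparison might be quantified (illustrative only, not the studio's actual Lava.flow software; the readings and the 1.0 °C threshold are invented for the example):

```python
# Flag left/right asymmetry from a small grid of skin temperatures in degrees C.
def asymmetry(grid):
    """Return mean(left half) - mean(right half) for a 2D temperature grid."""
    half = len(grid[0]) // 2
    left = [t for row in grid for t in row[:half]]
    right = [t for row in grid for t in row[-half:]]
    return sum(left) / len(left) - sum(right) / len(right)

# Toy 'knee region' readings: the left side runs about 1.5 degrees C hotter.
knee = [
    [33.9, 33.8, 32.4, 32.3],
    [34.1, 34.0, 32.5, 32.6],
]
delta = asymmetry(knee)
if abs(delta) > 1.0:  # assumed threshold for a follow-up check
    print(f"asymmetry {delta:+.1f} C -> check cleat/saddle setup")
```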

    “While traditional bike fits are often based on observation and feel, we use objective, data-driven measurements from the Flir thermal camera,” reveals Kevin. “Our Lava.flow process gives us unique, real-time insight into how a rider’s body responds while cycling. By way of example, we recently helped a cyclist experiencing unexplained knee pain during rides. Using our Flir infrared imaging technology, we observed excessive heat accumulation in the tibia [tibialis anterior muscle]. Thanks to the Flir imaging of this increased heat and our leg length software, we discovered that this leg was structurally shorter and that the rider had to pull the pedal excessively upward when cycling, resulting in knee pain.”

    All levels of cyclist

    Cyclists turning to Bikefit Van Staeyen for assistance range from dedicated amateurs to World Tour professionals. They trust the company for the same reason: attention to detail. From saddle pressure to neural load; from crank dynamics to thermal asymmetry – no variable is left unexplored. The company is also pioneering the bike-fit domain at university level, a first in Europe, by collaborating with the University of Antwerp to integrate data-driven approaches.

    “We want to serve as the fundamental partner and reference point for thermal camera technology within the sport of cycling,” concludes Kevin. “Our ambition is to help shape the future of performance diagnostics, not just for our own athletes, but as a knowledge and technology hub for teams and riders worldwide. With our expertise and experience we can demonstrate the immense potential of thermography in biomechanical and performance analysis.”

  • Institut Pasteur injects new sustainable display capabilities inside Paris HQ with a network of Philips ePaper and EcoDesign digital signage

    PPDS, together with integration specialist Exaprobe and digital signage software partner Telelogos, have combined their expertise to deliver unrivalled high-quality, low-energy visual performance and remote management capabilities to the Institut’s 3,000-plus staff, with a fleet of 20 sustainability-conscious Philips Professional Displays.

     Amsterdam, Feb 20: PPDS, the exclusive global provider of Philips Professional Displays, is excited to announce that its Tableaux ePaper and 3000 Series EcoDesign digital signage displays have been selected to deliver a perfect tonic of high-performance, low-energy visual capabilities to Institut Pasteur’s 538,000 ft² biomedical research campus in Paris.

     Founded by Louis Pasteur in 1887, the Institut Pasteur is an internationally acclaimed not-for-profit research and education institute committed to the fight against infectious diseases in France and around the world. A recipient of 10 Nobel Prizes and employing over 3,000 staff, the Institut’s colossal five-hectare campus features 39 separate buildings, including a conference centre, and a total of 48,000 m² of laboratory space.

     Future proofed planning

    With such a vast campus, and with visual technology playing an increasingly important role in its day-to-day activities and communications, Institut Pasteur’s AV/IT management team sought to modernise its ageing display infrastructure. Following an extensive site review, the project scope included a fleet of 20 dynamic displays, strategically placed to support a variety of needs, settings, and light environments, including reception halls, meeting rooms, laboratories, and more.

     The project presented a number of unique challenges. As a site of historical significance – containing several listed buildings – retaining the aesthetics during any modernisation, while ensuring minimal disruption to staff, was imperative. Furthermore, displays would need to meet the Institut Pasteur’s strict standards for electrical safety and durability, while providing greater energy efficiency to reduce its carbon footprint.

     Romain Gentile, Key Account Manager at PPDS, commented: “Performance, readability, and energy efficiency were all key, with the displays disseminating scientific, institutional, and logistical information. For Institut Pasteur, and the invaluable work they do, there can be absolutely no compromise.”

     Effective communication

    Working with AV/IT integration specialist Exaprobe, PPDS identified its multi-award-winning ‘zero power’ 32” Philips Tableaux ePaper and 55” Philips Signage 3000 Series EcoDesign displays as the only solutions capable of meeting – and ultimately surpassing – the Institut’s high expectations.

     Signalling a new era of visual communications and sustainability, Philips Tableaux ePaper displays were selected primarily – but not exclusively – to provide wayfinding information, such as mapping, campus information, and other instructions, helping visitors navigate the site.

     Fully portable and able to be used entirely unplugged – ideal for use in the Institut’s older buildings and in spaces with limited power sources – each Philips Tableaux is capable of displaying full colour, static imagery for days, weeks, months, or even years without consuming a single kilowatt-hour of energy. The only time Philips Tableaux displays require power is during content updates, with one image change using just 0.0025 kWh.
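    Taking the release's own figure of 0.0025 kWh per image change (and assuming, for illustration, that the display draws nothing between updates), the annual energy budget is simple arithmetic:

```python
# Quick arithmetic on the quoted figure: 0.0025 kWh per image change.
ENERGY_PER_UPDATE_KWH = 0.0025  # from the release

def annual_energy_kwh(updates_per_day, days=365):
    """Energy used per year if the display only draws power during updates."""
    return updates_per_day * days * ENERGY_PER_UPDATE_KWH

print(annual_energy_kwh(1))   # one change a day: ~0.91 kWh per year
print(annual_energy_kwh(24))  # hourly changes: ~21.9 kWh per year
```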

     The Gold standard

    For the institute’s more advanced and detailed visual needs on a grander scale, including for presentations, corporate videos, and other internal communications, the future proof Philips Signage 3000 Series EcoDesign was the standout choice, ticking all boxes for both performance and sustainability.

     In addition to delivering high-impact 4K Ultra HD visual quality, with picture-perfect performance down to the smallest detail – be that videos, pictures or numbers – the Philips Signage 3000 Series EcoDesign uses less than 50 per cent of the power of comparable digital signage models on the market, without compromising on performance.

     The Philips Signage 3000 Series EcoDesign is also the industry’s first EPEAT Climate+ Gold certified display of its kind. The certification measures the social and environmental impacts of products from extraction to end of life, and the Philips Signage 3000 EcoDesign meets its most demanding set of criteria for sustainability leadership in electronics.

     24/7 management

    Both the Philips Tableaux and Philips Signage 3000 Series EcoDesign are also members of PPDS’ growing portfolio of integrated Android SoC displays, offering a vast range of features and benefits, including secure and seamless remote management capabilities with trusted partners.

     Enabling centralised management of the new fleet, the Media4Display solution from PPDS partner Telelogos was selected and integrated, providing round-the-clock monitoring and management. This also allows the Pasteur technical team to schedule content in real time without having to travel or update displays manually, further reducing costs and their carbon footprint. 

     The full integration proceeded successfully and without disrupting research activities. Connectivity and content management tests were carried out on site, ensuring a seamless transition.

     Franck Fromet, AV Manager, Institut Pasteur, commented: “The integration of our new Philips Professional Displays has enabled us to modernise our communication while respecting our environmental commitment. The PPDS teams understood our constraints and proposed a solution that is understated, elegant, and effective.”

     Romain Gentile concluded: “The Institut Pasteur now benefits from a modern, sustainable display system that is fully adapted to its scientific environment. Information is disseminated more effectively, content is updated instantly, and energy consumption has been significantly reduced.”

  • Liquibase Secure 5.1 Extends Modeled Change Control to Snowflake

    New release makes Snowflake control plane changes governable and auditable across access, data movement, and execution, and adds support for Couchbase, AWS Keyspaces, DataStax Enterprise, and AlloyDB.

     

    Austin, TX — Feb 20— Liquibase, the leader in Database Change Governance, today announced the release of Liquibase Secure 5.1, extending modeled Change Control to Snowflake. With 5.1, enterprises can govern Snowflake control plane changes with the same rigor and automation they already apply to schema evolution, closing a critical gap in data platform security, compliance, and AI readiness. Liquibase Secure 5.1 also expands database platform coverage, including new support for additional cloud and enterprise data stores.

     

    Snowflake has become mission-critical infrastructure for analytics, data products, and AI initiatives. As organizations scale DataOps and internal developer platforms, Snowflake changes are no longer isolated technical updates. They are platform-level changes that impact trust, availability, and every downstream consumer. Yet many of the most consequential changes still happen outside standardized governance, often delivered as scripts with limited visibility, weak enforcement, and evidence that is difficult to assemble when it matters most.

     

    “As enterprises modernize their developer platforms for AI-driven delivery, change control at the database layer has become a prerequisite, not a nice-to-have,” said Mirek Novotny, Sr. Director of Product at Liquibase. “If Snowflake control plane changes aren’t governed and observable, you can’t prove control. Liquibase Secure 5.1 brings predictability and evidence to the changes that matter most, without slowing teams down.”

     

    Modeled Change Control for Snowflake

     

    Liquibase Secure 5.1 treats key Snowflake control plane changes as first-class, modeled change types, rather than opaque scripts. That modeling enables precise policy enforcement, object-aware drift detection, and audit-ready evidence at the level where access, movement, and execution are defined.

    With Liquibase Secure 5.1, data platform teams can govern Snowflake changes across access and security configuration, data sharing and movement, platform and cost controls, and automated execution, using standardized workflows across environments and teams.

    Key outcomes include:

    • Stop risky Snowflake control plane changes before they reach production
    • Standardize how Snowflake changes are delivered across environments and teams
    • Automatically generate audit-ready evidence for every change
    • Detect drift and out-of-band updates to governed Snowflake objects
    • Recover faster with traceable, reversible changes and tested rollback procedures

    This closes a long-standing gap for organizations that govern schema evolution, yet still struggle with over-permission creep, ungoverned data movement, and control plane drift that can undermine security posture and AI initiatives.
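    To make the workflow concrete, here is a minimal, hypothetical example in Liquibase's formatted-SQL changelog syntax: a reversible Snowflake grant tracked as a changeset. The warehouse and role names are invented, and the new modeled Snowflake change types in Secure 5.1 may use dedicated change syntax rather than raw SQL; this sketch only illustrates the general audit-and-rollback pattern.

```sql
--liquibase formatted sql

--changeset dataplatform:grant-analyst-usage labels:snowflake
--comment Hypothetical governed grant; object and role names are illustrative only
GRANT USAGE ON WAREHOUSE ANALYTICS_WH TO ROLE ANALYST_ROLE;
--rollback REVOKE USAGE ON WAREHOUSE ANALYTICS_WH FROM ROLE ANALYST_ROLE;
```

    Each changeset carries an author:id pair and an explicit rollback, which is what makes a change traceable, auditable, and reversible.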

     

    Built for DataOps, data products, and AI readiness

    As Snowflake increasingly powers feature engineering, model training, and AI-driven decisioning, the blast radius of ungoverned change grows. A single access change can expose sensitive training data. An unreviewed sharing update can expand compliance scope. An execution change can silently alter business-critical logic. Liquibase Secure 5.1 helps data platform teams keep Snowflake predictable, auditable, and reliable as usage scales, without turning governance into a bottleneck.

     

    Expanding database support across Liquibase’s industry-leading coverage

    Liquibase Secure continues to deliver broad database coverage across 60+ platforms, from mainframe DB2 to cloud-native data stores. Liquibase Secure 5.1 expands support for Snowflake, Databricks, and MongoDB, and adds new platform support for Couchbase, AWS Keyspaces, DataStax Enterprise, and AlloyDB for Google Cloud. This breadth helps enterprises standardize change governance across heterogeneous environments using a single platform instead of stitching together siloed tools and processes. Teams can apply consistent workflows and generate unified, audit-ready evidence across their database estate, reducing operational overhead while preserving the flexibility to adopt new technologies without rebuilding governance each time.

     

    Enterprise partnership, not just tooling

    Liquibase brings more than a decade of frontline experience helping enterprises govern database change at scale. In addition to the platform, Liquibase provides hands-on professional services, a dedicated customer success organization, and ongoing advisory support to help teams operationalize Change Control across their delivery model.

  • Deevia Software Wins Toshiba GridDB IoT Hackathon, Bengaluru Teams Sweep Top Spots

    Bengaluru, Feb 19: Toshiba Digital Solutions Corporation, in collaboration with Toshiba Software India Private Limited, successfully concluded the GridDB Cloud IoT Hackathon, an innovation initiative aimed at enabling students, developers, and startup companies to build real-time IoT applications using Toshiba’s GridDB Cloud Database Service for Big Data and IoT.

    The hackathon ran from October 29 to December 14, 2025, and received strong participation, with over 250 applications from across India addressing use cases including healthcare, finance, IoT, and knowledge management, demonstrating the versatility of the GridDB database service.

    After an intensive evaluation process, five finalist teams advanced to the in‑person final rounds held in Bengaluru from January 31 to February 1, 2026. During the onsite sessions, they received direct technical support and mentorship from the GridDB technical team, helping them further refine their proof‑of‑concept (PoC) solutions. Following the final PoC presentations to a panel of judges, the winners were announced, with all finalists sharing a total cash prize of USD 5,000.

    The GridDB Cloud IoT Hackathon helped create a community of innovators who are eager to use real-time data to solve real-world problems. Toshiba will continue to engage with participants through the GridDB® community and future initiatives in India, strengthening its contribution to the country’s digital transformation and IoT ecosystem.

    Commenting on the initiative, Mr. Hiroshi Tsukino, Director and Vice President of Toshiba Digital Solutions Corporation said,

    “Toshiba is committed to making a better world through the power of data by utilizing various kinds of data generated by businesses related to social infrastructure and turning them into platforms. The Toshiba Group is pursuing a strategy of transformation toward Digital Evolution (DE), Digital Transformation (DX), and Quantum Transformation (QX) to develop the digital economy, and India is a key innovation hub in Toshiba’s global digital strategy. Through initiatives like the GridDB Cloud IoT Hackathon, we are empowering developers with advanced data platforms to create scalable, real-time solutions that address complex industrial and societal challenges.”

    Addressing the participants and judges, Mr. Ramdas Baliga, Managing Director, Toshiba Software India Private Limited added,

    “Toshiba Software India’s key strategic direction is to evolve into a digitally agile Centre of Excellence, embedding digital thinking and speed across everything we do, and translating advanced technologies into real-world IoT solutions. Initiatives such as the GridDB Cloud IoT Hackathon reflect this commitment by bringing together Toshiba’s experts and engineers with next-generation innovators to showcase technologies that will shape the future. I am pleased to see the event attract many such ideas, and proud that Toshiba could support these innovators in advancing their vision.”

    Sharing their experience, Mr. Nitin Mangalashankar, hackathon participant of the winner team – Deevia Software India Private Limited said,

    “From a hands-on perspective, the hackathon was a great opportunity to quickly prototype and validate a GenAI-driven use case on top of real-time operational data using GridDB Cloud. The platform allowed us to ingest and query time-series data efficiently under tight timelines, making it easy to focus on experimentation and solution design rather than infrastructure challenges.”

  • Rackspace and Palantir Partner to Run Foundry and AIP in Production with Governed Managed Operations

    Customers to gain accelerated AI-driven business outcomes from implementation expertise, cloud hosting, and data migration support in a governed operating model

     San Antonio, TX – Feb 19– Rackspace Technology® (NASDAQ: RXT), a hybrid multicloud and AI solutions company, and Palantir Technologies Inc. (NASDAQ: PLTR), a global leader in operational artificial intelligence platforms, today announced a strategic partnership to help enterprises rapidly deploy and operate Palantir’s Foundry and Artificial Intelligence Platform (AIP) in production to achieve measurable business outcomes.

     Through this partnership, Rackspace’s governed operating model will provide consistent security, operating controls and compliance from edge to core to cloud, enabling customers to deploy AI use cases with Palantir in production in weeks or months versus months or years. The companies are also collaborating to run Palantir software in Rackspace’s Private Cloud and UK Sovereign data centers. This is especially critical for regulated industries where AI deployments must meet strict data sovereignty and compliance requirements. 

     Organizations struggle to extract business value from AI and data platforms because deploying and operating these systems at scale requires specialized expertise they often don’t have in-house. As Palantir’s strategic partner in data migration and global implementation services, Rackspace will help customers prioritize their most high-impact business problems, then deliver implementation, including data readiness, hosting, and ongoing managed operations of Palantir’s platform to realize outcomes. As part of this collaboration, Rackspace has 30 Palantir-trained engineers providing data migration and applying a forward-deployed approach to solving high-impact customer problems, and is on track to scale to more than 250 over the next 12 months.

     “Organizations need AI that works in production, not just in demos,” said Gajen Kandiah, CEO of Rackspace Technology. “Palantir’s platform, combined with Rackspace’s governed cloud operations and our shared forward deployed engineering approach, enables customers to accelerate time to value and drive competitive business impact with governance and security. This is especially important in regulated industries.”

     The partnership combines Rackspace’s 25 years of experience managing mission-critical enterprise workloads across hybrid environments with Palantir’s decision-intelligence platform. Customers can benefit from a turnkey deployment model designed to reduce risk and operational burden and accelerate time to value. For regulated and data-sensitive organizations, this partnership aims to deliver greater confidence to deploy advanced AI capabilities in a private cloud environment that meets sovereignty, security, and residency requirements.

     “Organizations that adopt our AI Operating Systems fundamentally change their unit economics. In the context of migrating complex data environments, Palantir AIP is taking completion timelines from years to days. Rackspace will help our customers accelerate their pace of adoption and as a result, lead their respective industries,” said Sameer Kirtane, Head of US Commercial at Palantir.

     Integrated Service Delivery Across the Stack

    Customers want a consistent way to deploy, govern, and operate AI across their data environments, with accountability and measurable outcomes. Unlike point solutions that require customers to manage infrastructure, data pipelines, and AI operations separately, this partnership is aimed at providing end-to-end infrastructure hosting, data migration, implementation services and ongoing managed operations as an integrated service.

     

  • ManageEngine Introduces Causal Intelligence and Autonomous AI to IT Operations for Faster Incident Response

    Cairo, Egypt, Feb 18 – ManageEngine, a division of Zoho Corporation and a leading provider of enterprise IT management solutions, today added new causal intelligence and autonomous AI capabilities to Site24x7, its full-stack observability platform. These enhancements transform how enterprises handle outages, shifting from firefighting to autonomous resilience. By drastically reducing mean time to recovery (MTTR) and ensuring service-level agreement (SLA) compliance, Site24x7 helps IT teams safeguard the customer experience and retain trust.

    Modern IT environments are increasingly fragmented across hybrid clouds, microservices, and dynamic networks, generating massive volumes of telemetry and predictive anomaly signals every second. When an incident occurs, this complexity turns troubleshooting into a needle-in-a-haystack search, often leading to prolonged downtime. IT teams struggle to correlate anomaly signals and events across these layers, delaying the fixes needed to restore normal service and jeopardizing brand reputation.

    “Hybrid and cloud-native architectures have made IT operations highly interconnected, while IT managers are under constant pressure to resolve incidents quickly amid growing complexity,” said Srinivasa Raghavan, director of product management at ManageEngine. “By combining predictive anomaly detection, intelligent event correlation, service dependency context, and AI-driven causal insights, Site24x7 cuts through alert noise to show not just what is broken, but what caused it and what it impacts, helping teams identify the true fault faster and significantly reduce MTTR while minimizing service disruption.”

    “Triaging and resolving incidents in hybrid environments with growing infrastructure complexity can quickly become a nightmare, especially when SLA commitments are on the line,” said Pravir Kumar Sinha, IT leader at Synechron, a global IT services company and one of the early customers to access the feature. “With Site24x7 AIOps, we’re able to filter out nearly 90% of alert noise, pinpoint issues faster, and accelerate resolution. This helps us achieve stronger SLA adherence, reduce MTTR, and ultimately deliver a reliable digital experience for customers.”

    The introduction of autonomous AI in Site24x7 represents a practical step toward more autonomous IT operations by analyzing observability data, reducing cognitive overload, and turning insights into clear, actionable guidance. “With MCP providing the control and governance layer, we ensure this intelligence is applied securely and within enterprise guardrails. This empowers IT leaders to move toward agentic workflows with confidence, stay ahead of the AI adoption curve, and strengthen the resilience of their critical digital services,” said Raghavan.

    Key capabilities include:

    • Domain-aware causal correlation with predictive anomaly detection: Detects anomalies and correlates related signals across applications, infrastructure, and networks into a single, context-rich problem—so teams can quickly understand what is connected and where to start.
    • Customizable AI Agents with governed, task-driven automation: Enables customers to create and tailor AI Agents, set approved guardrails using solution documents, and assign tasks that guide agents from analysis to guided action—making response workflows more consistent across teams.
    • MCP-enabled agentic foundation for customers: MCP provides the enabling layer for customers to build and operationalize agentic use cases on top of observability data—standardizing how agents access data, follow approved guidance, and execute tasks within enterprise-ready controls and auditability.
    • Orchestrated remediation with Qntrl: Coordinates downstream actions through structured workflows and repeatable runbooks, powered by Zoho’s workflow and orchestration platform Qntrl, with approvals and traceability built in to support controlled automation.
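    Site24x7’s correlation engine is proprietary, but the idea behind dependency-aware causal correlation can be sketched in a few lines: alerts that arrive within a short time window on services connected in a dependency graph collapse into a single, context-rich problem. The service names, the window, and the graph below are invented for illustration and are not Site24x7 APIs or data.

```python
# Minimal sketch of dependency-aware alert correlation (illustrative only).
# Alerts that arrive close together on related services are grouped into
# one "problem" instead of surfacing as independent alarms.

# Hypothetical service dependency edges: service -> services it depends on.
DEPENDS_ON = {
    "web-frontend": {"api-gateway"},
    "api-gateway": {"orders-db", "auth-service"},
}

def related(a, b):
    """Two services are related if one transitively depends on the other."""
    def reachable(src, dst, seen=None):
        seen = seen or set()
        if src == dst:
            return True
        return any(reachable(p, dst, seen | {src})
                   for p in DEPENDS_ON.get(src, ()) if p not in seen)
    return reachable(a, b) or reachable(b, a)

def correlate(alerts, window=300):
    """Group (timestamp_s, service, message) alerts into problems."""
    problems = []
    for ts, svc, msg in sorted(alerts):
        for prob in problems:
            # Join an existing problem if any of its alerts is recent
            # and on a related service.
            if any(ts - t <= window and related(svc, s) for t, s, _ in prob):
                prob.append((ts, svc, msg))
                break
        else:
            problems.append([(ts, svc, msg)])
    return problems

alerts = [
    (0, "orders-db", "high query latency"),
    (45, "api-gateway", "upstream timeouts"),
    (60, "web-frontend", "5xx spike"),
    (4000, "auth-service", "cert expiring"),
]
probs = correlate(alerts)
print(len(probs))  # 2: one correlated outage plus one unrelated alert
```

    In this toy run, the database latency, gateway timeouts, and frontend 5xx spike collapse into one problem rooted in the dependency chain, while the later certificate alert stays separate, which is the “what caused it and what it impacts” framing described above.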

    These AIOps capabilities are now available for all users in Professional and Enterprise plans.

  • Seco® High Feed SP07 reduces inventories and maximizes productivity

    Capable of handling a wide mix of materials, Seco® High Feed SP07 excels in all machining strategies and allows you to push productivity levels, particularly on complex components.


    A positive cutting rake angle ensures optimal chip formation, while the stable insert design and constant lead angle deliver predictable cutting behavior, paramount for unmanned production.

    Reduce the need for skilled labor

    The SP07 addresses common industry challenges: frequent tool changes, unpredictable results, and high costs due to rapid wear. In one reliable solution, it simplifies tool management and reduces the need for skilled labor. Digital traceability via Data Matrix codes further streamlines operations, making the SP07 ideal for high-volume and unmanned production.

    High metal removal rates in shallow depths of cut

    Each insert features four cutting edges, maximizing usage and extending tool life. Even with shallow depths of cut (≤0.8 mm), the SP07 maintains high metal removal rates, ensuring manufacturers stay on track with productivity goals. The result is a significant reduction in cost per part and improved operational efficiency.
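    For a sense of scale, the metal removal rate in milling is conventionally Q = ap × ae × vf (axial depth of cut times radial width of cut times table feed); high-feed tools offset a shallow ap with a very high feed rate. The parameter values below are hypothetical and are not Seco cutting data.

```python
def mrr_cm3_per_min(ap_mm, ae_mm, vf_mm_min):
    """Metal removal rate Q = ap * ae * vf (standard milling formula),
    converted from mm^3/min to cm^3/min."""
    return ap_mm * ae_mm * vf_mm_min / 1000.0

# Hypothetical high-feed parameters: shallow 0.8 mm depth of cut,
# 20 mm radial engagement, 3000 mm/min table feed.
q = mrr_cm3_per_min(0.8, 20.0, 3000.0)
print(round(q, 1), "cm^3/min")  # 48.0 cm^3/min
```

    The arithmetic shows why a shallow depth of cut need not limit productivity: the high table feed keeps the volume swept per minute competitive.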

    “Our customers need to boost productivity and cut costs. Seco® High Feed SP07 delivers reliable, flexible performance across materials,” says Benoît Patriarca, Product Manager Copy High Feed Milling. “The four cutting edges and digital traceability simplify processes further, even when skilled labor is limited.”

    With its origins in Fagersta, Sweden, and a presence in more than 75 countries, Seco is a leading global provider of metal cutting solutions for indexable milling, solid milling, turning, holemaking, threading and tooling systems. For nearly 100 years, Seco has driven excellence throughout the entire manufacturing journey, ensuring high-precision machining and high-quality components.

  • Brookhaven Lab Builds Successful ‘Cloud in a Box’

    Feb 18: In a quiet laboratory, a team of atmospheric scientists and engineers at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory recently gathered around a workstation to watch as little floating speckles, illuminated by a curtain of green light, swirled into a haze, then a wisp of a cloud.

    This instance of creation unfolded inside a programmable atmosphere they’d built from scratch.

    “We saw the birth of a cloud,” said Brookhaven atmospheric scientist Arthur Sedlacek. “There was a lot of excitement and happiness, and relief, in that moment. Needless to say, we definitely weren’t quiet after that.”

    Researchers will use the new convection cloud chamber, a customizable one-cubic-meter metal box, to tackle fundamental unknowns that remain about clouds.

    Clouds might seem simple — white, fluffy shapes drifting overhead — but they remain one of the biggest sources of uncertainty in models of weather and Earth’s complex atmospheric system.

    Scientists know that clouds play important roles in regulating Earth’s energy balance, controlling how water moves through the atmosphere, driving storm formation, and influencing how intense weather systems become. Still, researchers’ understanding of the physics underlying cloud processes is limited.

    “We need repeatable, controlled experiments in order to tease out the key factors and mechanisms governing those underlying small-scale processes,” Sedlacek said. “For example, one long-standing unsolved problem in our community is how drizzle or raindrops are formed in warm clouds. Why do some clouds precipitate while others do not?”

    Collecting abundant measurements from clouds in nature, while challenging, provides some of the data needed to address these questions. Brookhaven scientists and their collaborators have piloted specially equipped aircraft through clouds to collect such data. But each flythrough hits a cloud that has already changed since the plane’s first pass.

    The cloud chamber will allow scientists to study clouds in a more controlled setting.

    “The cloud chamber provides us with a unique environment to isolate and rigorously study important but still poorly understood cloud microphysical processes,” said Brookhaven atmospheric scientist Fan Yang. “We can use it to mimic real atmospheric clouds under well-controlled laboratory conditions and perform detailed, repeatable cloud measurements.”

    Watch as a cloud forms in the chamber. Scientists use a green laser to see the process. (Brookhaven National Laboratory)

    Controlled cloud making

    Brookhaven Lab’s convection cloud chamber combines ingredients needed to make a cloud: air that is supersaturated with water and aerosol particles, tiny particles suspended in the atmosphere that can trigger the condensation of water vapor into cloud droplets.

    Scientists first fill the chamber’s bottom baseplate with water. Then they heat it up, releasing water vapor into the chamber through evaporation. The top panel of the box is cold. As the warm water vapor from the bottom rises and mixes with cool air from the top, it builds up an atmosphere where the air is “thick” with humidity.

    “Cloud formation requires the relative humidity to be greater than 100% — a condition we refer to as supersaturation,” Sedlacek said. “Such a supersaturated environment is achieved in the chamber by the mixing of warm humid air with cold humid air.”
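    The supersaturation Sedlacek describes comes from the Clausius-Clapeyron relation: saturation vapor pressure rises roughly exponentially with temperature, so a linear mix of a warm saturated parcel and a cold saturated parcel holds more vapor than air at the mixed temperature can. A minimal sketch using the Magnus approximation (an approximation chosen here for illustration; the chamber’s actual operating temperatures are not given in the article, and latent-heat release during mixing is ignored):

```python
import math

def e_sat(t_c):
    """Saturation vapor pressure (hPa) via the Magnus approximation."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

# Mix equal parts of warm saturated air (40 C) and cold saturated air (10 C).
t_warm, t_cold = 40.0, 10.0
t_mix = (t_warm + t_cold) / 2                # temperature mixes linearly
e_mix = (e_sat(t_warm) + e_sat(t_cold)) / 2  # so does vapor pressure

rh_mix = 100 * e_mix / e_sat(t_mix)
print(round(rh_mix, 1))  # well above 100: the mixed parcel is supersaturated
```

    Because e_sat(t) curves upward, the average of the two saturated vapor pressures exceeds the saturation value at the average temperature, so the mixed parcel sits above 100% relative humidity and cloud droplets can form.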

    To trigger cloud droplet formation in this supersaturated atmosphere, scientists inject aerosol particles, such as table salt, into the chamber to serve as “seeds” for cloud formation. When water vapor from the air condenses on the salt particles, it forms tiny cloud droplets. In the humidified environment, these droplets will continue to grow through additional condensation of water vapor. Eventually, this establishes a steady state between the cloud droplet particle size and the relative humidity.
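    The condensational growth and eventual balance described above follow the textbook growth law r·dr/dt = G·S, where S is the fractional supersaturation and G lumps together diffusion and thermal terms (this law and the values below are illustrative assumptions, not figures from the article). Growth slows as droplets get larger, which is one reason condensation alone struggles to produce drizzle-sized drops, the open question Sedlacek raises.

```python
import math

# Idealized condensational growth: r * dr/dt = G * S, which integrates to
# r(t) = sqrt(r0^2 + 2*G*S*t). Values are rough orders of magnitude.
G = 1e-10   # m^2/s, assumed growth coefficient for warm-cloud conditions
S = 0.01    # 1% supersaturation, assumed

def radius(t_s, r0=1e-6):
    """Droplet radius (m) after t_s seconds, starting from r0."""
    return math.sqrt(r0**2 + 2 * G * S * t_s)

# The same time interval adds less radius as the droplet grows,
# so condensation alone plateaus well short of drizzle sizes.
r60 = radius(60)
print(round(r60 * 1e6, 1), "micrometers")  # 11.0 micrometers after a minute
```

    Under these assumed values, a 1 µm seed reaches about 11 µm in a minute, but reaching a ~100 µm drizzle drop by condensation alone would take hours, hinting at why other mechanisms such as turbulence-aided collisions are studied in the chamber.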

    “One major advantage of a convection cloud chamber, compared with other types of cloud chambers, is that we can maintain a turbulent cloud for hours in a steady state,” Yang said. “This will allow repeated measurements of cloud properties, which improves statistical robustness.”

    The cloud chamber at Brookhaven Lab is made up of individual heating and cooling side panels that let researchers steer settings such as relative humidity, temperature, and turbulence, the degree of mixing and swirling in the air. Rearranging the heating and cooling side panels creates different internal chamber conditions, allowing more complex cloud regimes to form. The chamber is also designed so that scientists can measure how factors such as aerosol composition and size and temperature influence cloud formation, cloud droplet size distribution, and cloud persistence.

    “From an experimental perspective, there are lots of knobs we can turn to create specific atmospheric conditions within the chamber,” Sedlacek said. “We’ve started thinking about how we can incorporate artificial intelligence and machine learning into the cloud chamber’s workflow.”

    The unique modular design also offers flexibility for the future. For example, the structure is meant to be expandable. Adding another cubic meter on top would expand the working volume, leading to increased cloud lifetime. This would open the door to even more ambitious studies of drizzle and raindrop formation, the researchers said.

    Making measurements with advanced imaging

    A crucial component of these studies is using tools that can take measurements inside the cloud chamber without touching and disrupting the cloud and its environment. The Brookhaven team is developing next-generation instrumentation and methods to make this possible.

    “We want to be able to detect the transition of aerosols to cloud droplets to drizzle without sticking instruments inside the chamber so that we don’t disrupt the air flow,” Sedlacek said. “To realize this goal, we’ll use light.”

    Scientists aim first to detect aerosol particles that activate into cloud droplets by tagging the particles with fluorescent dye. Tagged and activated aerosols will light up when hit by a laser. Next, researchers will use time-correlated photon-counting lidar — a laser-based remote-sensing instrument — to observe a cloud’s structure at the scale of a single centimeter. Then, to detect drizzle and follow its movement within the cloud chamber, they plan to use novel THz radar that captures individual droplets and measures how fast they fall.

    Powered by collaboration

    What started out as brainstorming, scribbles, and long chats turned into a solid design for a successful convection cloud chamber — one of only two in the nation — thanks to close collaboration between scientists, engineers, and support staff across Brookhaven Lab.

    “The expertise necessary to create something like this chamber requires modelers, observationalists, experimentalists, and engineers to pull it all together — and that is part and parcel of what national labs do,” Sedlacek said.

    Engineers from the Lab’s Instrumentation Department and scientists from the Environmental Science and Technology Department began collaborating on the cloud chamber a few years ago, after a meeting that highlighted Instrumentation’s capabilities and how they could support scientific research. That discussion sparked the idea to build a cloud chamber together.

    As the team formed, engineers refined the design while learning more about the scientific requirements — especially the need for precise temperature control.

    “It was a very iterative process,” said mechanical engineer Nathaniel Speece-Moyer. “We have great people and resources on site, and we used our engineering judgment to weigh different design options with frequent input from the scientific staff. We converged on a final design that the group is happy with.”

    The final design is modular and carefully controls temperature while ensuring that air and particles inside the chamber remain undisturbed. All of the hardware is located outside the chamber to avoid interfering with experiments.

    Many of the components were fabricated in house by Brookhaven Lab’s fabrication services, which reduced costs and allowed the engineering team to make adjustments along the way, said mechanical engineer Connie-Rose Deane.

    “This cloud chamber is a great example of how engineers, scientists, and technicians can collaborate together to achieve something special,” Deane said. “We also had a lot of support from budget, safety, and facilities staff. What really powered me through this work was the excitement everyone brought to the project.”

    Throughout the process, the team also drew on experience gained from the Michigan Technological University’s (MTU) Pi Cloud Chamber, the only other convection cloud chamber in the United States. Raymond Shaw, a professor at MTU, has a joint appointment with Brookhaven’s Environmental Science and Technology Department and was key to developing both chambers.

    “Cloud chamber science is experiencing a resurgence for several reasons,” Shaw said. “Perhaps most importantly, the atmospheric physics community has realized that there are still fundamental questions about how aerosol and cloud particles interact that directly influence how we can simulate atmospheric flows using coarse-resolution models, such as for storm or weather forecasting. The simplified, controlled, repeatable, and well-characterized conditions provided by a laboratory experiment in a cloud chamber can provide important insights.”

    At the same time, additional advances now make it possible to simulate these processes in great detail, enabling direct comparisons between experiments and computational models, Shaw said.

    Yang added: “The cloud chamber at Brookhaven Lab is the outcome of more than 10 years of experience. We’ve learned a lot from the Michigan Tech Pi Cloud Chamber group and from a multi-institution research activity jointly funded by DOE and the National Science Foundation aimed at exploring ideas for a larger-scale cloud chamber facility. We want to shout out all the work that led to this very smart design.”

    Scientists, engineers, and technicians worked together to assemble Brookhaven Laboratory’s convection cloud chamber. (Timothy Kuhn/Brookhaven National Laboratory)

    Looking beyond the clouds

    The potential of Brookhaven Lab’s new “cloud in a box” testbed stretches beyond just studying clouds. Its creators encourage suggestions for other research areas it can support. 

    Ideas floated for potential uses so far include investigations into how atmospheric conditions impact the performance of energy and information infrastructure, as well as the movement of bioaerosols — tiny natural particles such as pollen and pathogens.

    “The environment we create inside this chamber opens up other applications,” Sedlacek said. “We welcome the opportunity for ‘out-of-the-box’ ideas that this brand-new capability at Brookhaven Lab can provide.”

    This work was supported by Brookhaven’s Laboratory Directed Research and Development program.

     Brookhaven National Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time.