Blog

  • Make a Microsoft Graph call using JavaScript

    Unlocking Microsoft 365 Data with JavaScript

    Imagine this: your team is building a productivity app that needs to pull in user calendars, emails, or OneDrive files from Microsoft 365. You’ve heard of Microsoft Graph, the unified API endpoint for accessing Microsoft 365 data, but you’re not sure where to start. The documentation feels overwhelming, and you just want to see a working example in JavaScript. Sound familiar?

    Microsoft Graph is a goldmine for developers. It allows you to interact with Microsoft 365 services like Outlook, Teams, OneDrive, and more—all through a single API. But getting started can be tricky, especially when it comes to authentication and managing API calls securely. In this guide, I’ll walk you through how to set up and make your first Microsoft Graph API call using JavaScript. Along the way, I’ll share some hard-earned lessons, gotchas, and tips to ensure your implementation is both functional and secure.

    Before We Dive In: Security Implications

    Before writing a single line of code, let’s talk security. Microsoft Graph requires OAuth 2.0 for authentication, which means you’ll need to handle access tokens. These tokens grant access to sensitive user data, so mishandling them can lead to serious security vulnerabilities.

    🔐 Security Note: Never hardcode sensitive credentials like client secrets or access tokens in your codebase. Use environment variables or a secure secrets management service to store them.

    Additionally, always request the minimum set of permissions (scopes) your app needs. Over-permissioning is not only a security risk but also a violation of Microsoft’s best practices.

    Step 1: Setting Up the Microsoft Graph JavaScript Client Library

    The easiest way to interact with Microsoft Graph in JavaScript is by using the official @microsoft/microsoft-graph-client library. This library simplifies the process of making HTTP requests and handling responses.

    First, install the library via npm:

    npm install @microsoft/microsoft-graph-client

    Once installed, you’ll also need an authentication library to handle OAuth 2.0. For this example, we’ll use msal-node, Microsoft’s official library for authentication in Node.js:

    npm install @azure/msal-node

    Step 2: Authenticating with Microsoft Graph

    Authentication is the trickiest part of working with Microsoft Graph. You’ll need to register your application in the Azure portal to get a client_id and client_secret. Here’s how:

    1. Go to the Azure Portal and navigate to “App Registrations.”
    2. Click “New Registration” and fill in the required details.
    3. Once registered, note down the Application (client) ID and Directory (tenant) ID.
    4. Under “Certificates & Secrets,” create a new client secret. Store this securely; you’ll need it later.

    With your app registered, you can now authenticate using the msal-node library. Here’s a basic example:

    const msal = require('@azure/msal-node');
    
    // MSAL configuration — load secrets from environment variables or a
    // secrets manager rather than hardcoding them (see the note above)
    const config = {
      auth: {
        clientId: process.env.CLIENT_ID,
        authority: `https://login.microsoftonline.com/${process.env.TENANT_ID}`,
        clientSecret: process.env.CLIENT_SECRET,
      },
    };
    
    // Create an MSAL client
    const cca = new msal.ConfidentialClientApplication(config);
    
    // Request an access token
    async function getAccessToken() {
      const tokenRequest = {
        scopes: ['https://graph.microsoft.com/.default'],
      };
    
      try {
        const response = await cca.acquireTokenByClientCredential(tokenRequest);
        return response.accessToken;
      } catch (error) {
        console.error('Error acquiring token:', error);
        throw error;
      }
    }
    

    In this example, we’re using the “client credentials” flow, which is ideal for server-side applications and daemons. Keep in mind that a client-credentials token is app-only: there is no signed-in user behind it, so delegated-only endpoints such as /me won’t work with it. If you’re building a client-side app where a user signs in, use a delegated flow such as “authorization code” instead.

    Step 3: Making Your First Microsoft Graph API Call

    Now that you have an access token, you can use the Microsoft Graph client library to make API calls. With a delegated token you would fetch the signed-in user’s profile via the /me endpoint; our client-credentials token is app-only, so we’ll fetch a specific user through the /users endpoint instead:

    const { Client } = require('@microsoft/microsoft-graph-client');
    require('isomorphic-fetch'); // Required for fetch support in Node.js
    
    async function getUserProfile(accessToken, userIdOrUpn) {
      // Initialize the Graph client
      const client = Client.init({
        authProvider: (done) => {
          done(null, accessToken);
        },
      });
    
      try {
        // /users/{id | userPrincipalName} works with app-only tokens
        // (requires the User.Read.All application permission with admin consent);
        // with a delegated token you could simply call client.api('/me')
        const user = await client.api(`/users/${userIdOrUpn}`).get();
        console.log('User profile:', user);
      } catch (error) {
        console.error('Error fetching user profile:', error);
      }
    }
    
    // Example usage (the UPN below is a placeholder; use a real user from your tenant)
    (async () => {
      const accessToken = await getAccessToken();
      await getUserProfile(accessToken, 'someone@yourtenant.onmicrosoft.com');
    })();
    

    This code initializes the Microsoft Graph client with an authentication provider that supplies the access token. The api(`/users/${userIdOrUpn}`).get() call retrieves that user’s profile information.

    💡 Pro Tip: Use the select query parameter to fetch only the fields you need. For example, client.api(`/users/${userIdOrUpn}`).select('displayName,mail').get() will return only the user’s name and email.

    Step 4: Handling Errors and Debugging

    Working with APIs inevitably involves error handling. Microsoft Graph uses standard HTTP status codes to indicate success or failure. Here are some common scenarios:

    • 401 Unauthorized: Your access token is invalid or expired. Ensure you’re refreshing tokens as needed.
    • 403 Forbidden: Your app lacks the required permissions. Double-check the scopes you’ve requested.
    • 404 Not Found: The endpoint you’re calling doesn’t exist. Verify the API URL.

    To debug issues, enable logging in the Microsoft Graph client:

    const client = Client.init({
      authProvider: (done) => {
        done(null, accessToken);
      },
      debugLogging: true, // Enable debug logging
    });
    

    Step 5: Scaling Your Implementation

    Once you’ve mastered the basics, you’ll likely want to scale your implementation. Here are some tips:

    • Batch Requests: Use the /$batch endpoint to combine multiple API calls into a single request, reducing latency.
    • Pagination: Many Microsoft Graph endpoints return paginated results. Use the @odata.nextLink property to fetch additional pages (see the sketch after this list).
    • Rate Limiting: Microsoft Graph enforces rate limits. Implement retry logic with exponential backoff to handle 429 Too Many Requests errors.
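
    Here’s a minimal pagination sketch (assuming the client instance from Step 3; /users is just an illustrative endpoint that app-only tokens can list):

    // Follow @odata.nextLink until every page has been consumed
    async function getAllPages(client, path) {
      let items = [];
      let response = await client.api(path).top(50).get();
      while (response) {
        items = items.concat(response.value);
        response = response['@odata.nextLink']
          ? await client.api(response['@odata.nextLink']).get()
          : null;
      }
      return items;
    }

    // Example: const users = await getAllPages(client, '/users');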

    Conclusion

    By now, you should have a solid understanding of how to make Microsoft Graph API calls using JavaScript. Let’s recap the key takeaways:

    • Use the @microsoft/microsoft-graph-client library to simplify API interactions.
    • Authenticate securely using the msal-node library and environment variables for sensitive credentials.
    • Start with basic API calls like /me and gradually explore more advanced features like batching and pagination.
    • Always handle errors gracefully and implement retry logic for rate-limited requests.
    • Request only the permissions your app truly needs to minimize security risks.

    What will you build with Microsoft Graph? Share your thoughts and questions in the comments below!

  • How to start Edge browser with a work profile from the command line

    Imagine this: It’s Monday morning, you’ve just sat down at your desk, coffee in hand, ready to tackle your inbox. You hit your shiny new Stream Deck button to launch Outlook in Microsoft Edge, expecting your work profile to appear—only to be greeted by your personal account, memes and shopping carts included. Frustrating, right? If you’re juggling multiple profiles in Edge, you know the pain of always landing in the wrong one. Let’s fix that for good.

    Why Profiles Matter in Microsoft Edge

    Edge does a stellar job separating work and personal profiles, keeping your professional life distinct from your weekend browsing. But when you launch Edge from the command line (or automate it with tools like Stream Deck), it defaults to your personal profile. Not ideal if you’re trying to keep your work and personal worlds apart.

    The Command That Gets You There

    After some digging (and a few choice words), I found the solution. You can specify which profile Edge should use when launching a site. Here’s the magic command:

    start msedge --profile-directory="Profile 1" https://outlook.office.com/owa/

    How It Works

    • start msedge: Launches Microsoft Edge from the command line.
    • --profile-directory="Profile 1": Tells Edge which profile to use. “Profile 1” is usually your first added profile, but it can vary.
    • https://outlook.office.com/owa/: Opens Outlook Web Access directly in your chosen profile.

    Practical Tips & Gotchas

    • Find Your Profile Name: The profile directory name isn’t always obvious. To check yours, go to %LOCALAPPDATA%\Microsoft\Edge\User Data and look for folders like Profile 1, Profile 2, etc. Match the folder to your desired profile.
    • Spaces in Paths: If your profile name has spaces, keep the quotes around it. Otherwise, Edge will get confused.
    • Automating with Shortcuts: You can put this command in a batch file or use it with automation tools like Stream Deck, AutoHotkey, or Windows Task Scheduler (a minimal batch file is sketched after this list).
    • Multiple Profiles: If you have more than two profiles, make sure you’re using the correct directory name. “Profile 1” is not always your work profile!
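
    For example, a two-line batch file you can bind to a Stream Deck button (assuming your work profile lives in "Profile 1"):

    @echo off
    start msedge --profile-directory="Profile 1" https://outlook.office.com/owa/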

    My Take

    If you care about keeping your work and personal lives separate (and you should), this command is a must-have in your productivity toolkit. Don’t settle for Edge’s default behavior—take control and make your workflow seamless.

    Bonus: Open Any Site with Any Profile

    Want to open any site with a specific profile? Just swap out the URL:

    start msedge --profile-directory="Profile 2" https://github.com/

    Now go automate your day like a pro.

  • How to always show the full right-click menu in Windows 11

    Picture this: You’re deep in code, right-clicking to quickly edit a file, only to be greeted by Windows 11’s minimalist context menu. The option you need? Hidden behind a “Show more options” click. Frustrating, right? As a developer, every extra click slows you down. Luckily, there’s a straightforward fix to bring back the classic, full right-click menu—no more hunting for your favorite commands.

    Why Did Microsoft Change the Menu?

    Windows 11 introduced a cleaner, more modern UI, but at the cost of burying many useful context menu options. While the intention was to reduce clutter, for power users and developers, this means extra steps for basic tasks like “Edit” or “Open with Notepad.” If you value speed and efficiency over aesthetics, restoring the full menu is a no-brainer.

    How to Restore the Full Context Menu

    You can revert to the classic right-click menu with a simple registry tweak. Here’s how:

    reg add "HKCU\Software\Classes\CLSID\{86ca1aa0-34aa-4e8b-a509-50c905bae2a2}\InprocServer32" /f /ve
    taskkill /f /im explorer.exe
    start explorer.exe
    

    Step-by-Step Instructions

    1. Open Command Prompt as Administrator: Hit Win + S, type “cmd”, right-click and choose “Run as administrator.”
    2. Run the Registry Command: Paste the first line above. This creates a registry key that tells Windows to use the old menu.
    3. Restart Windows Explorer: The next two commands kill and restart Explorer, applying your changes instantly.

    Practical Tips & Gotchas

    • Backup Your Registry: Registry edits are powerful but risky. Always back up before making changes.
    • Reverting the Change: To undo, simply delete the {86ca1aa0-34aa-4e8b-a509-50c905bae2a2} key in the registry and restart Explorer (full commands after this list).
    • Windows Updates: Major updates may reset this tweak. If your menu reverts, just repeat the steps.
    • Why Not Use Third-Party Tools? Registry edits are cleaner, don’t require extra software, and are less likely to break with updates.
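
    For the revert mentioned above, the full sequence looks like this:

    reg delete "HKCU\Software\Classes\CLSID\{86ca1aa0-34aa-4e8b-a509-50c905bae2a2}" /f
    taskkill /f /im explorer.exe
    start explorer.exe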

    Final Thoughts

    Windows 11’s streamlined context menu might look pretty, but for those of us who live and breathe efficiency, it’s a step backward. Don’t settle for extra clicks—take control and restore your workflow. If you run into trouble, drop me a line; I’m always happy to help fellow devs cut through the nonsense.

  • How to use the az CLI to control VMs

    Imagine this: your boss needs a new web server spun up right now—and you’re the go-to person. You could click around in the Azure portal, but let’s be honest, that’s slow and error-prone. Real pros use the az CLI to automate, control, and dominate their Azure VMs. If you want to move fast and avoid mistakes, this guide is for you.

    Step 1: Create a Resource Group

    Resource groups are the containers for your Azure resources. Always start here—don’t be the person who dumps everything into the default group.

    az group create --name someRG --location eastus
    • Tip: Pick a location close to your users for lower latency.
    • Gotcha: Resource group names must be unique within your subscription.

    Step 2: Create a Linux VM

    Now, let’s launch a VM. Ubuntu LTS is a solid, secure choice for most workloads.

    az vm create --resource-group someRG --name someVM --image UbuntuLTS --admin-username azureuser --generate-ssh-keys
    • Tip: Use --generate-ssh-keys to avoid password headaches.
    • Gotcha: Don’t forget --admin-username—the default is not always what you expect.
    • Gotcha: Recent CLI releases retired the UbuntuLTS alias; if the create fails, pass an explicit image such as Ubuntu2204.

    Step 3: VM Lifecycle Management

    VMs aren’t fire-and-forget. You’ll need to redeploy, start, stop, and inspect them. Here’s how:

    az vm redeploy --resource-group someRG --name someVM
    az vm start --resource-group someRG --name someVM
    az vm deallocate --resource-group someRG --name someVM
    az vm show --resource-group someRG --name someVM
    • Tip: deallocate stops billing for compute—don’t pay for idle VMs!
    • Gotcha: Redeploy is your secret weapon for fixing weird networking issues.
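
    Want to confirm a VM is really deallocated before you walk away? Query its power state (the -d flag, covered next, adds these instance details):

    az vm show -d -g someRG -n someVM --query powerState -o tsv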

    Step 4: Get the Public IP Address

    Need to connect? Grab your VM’s public IP like a pro:

    az vm show -d -g someRG -n someVM --query publicIps -o tsv
    • Tip: The -d flag gives you instance details, including IPs.
    • Gotcha: If you don’t see an IP, check your network settings—public IPs aren’t enabled by default on all VM images.

    Step 5: Remote Command Execution

    SSH in and run commands. Here’s how to check your VM’s uptime:

    ssh azureuser@<VM_PUBLIC_IP> 'uptime'
    • Tip: Replace <VM_PUBLIC_IP> with the actual IP from the previous step.
    • Gotcha: Make sure your local SSH key matches the one on the VM, or you’ll get locked out.
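
    Prefer one copy-paste line? You can splice the IP lookup straight into the ssh call (assuming a bash-compatible shell):

    ssh azureuser@$(az vm show -d -g someRG -n someVM --query publicIps -o tsv) 'uptime'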

    Final Thoughts

    The az CLI is your ticket to fast, repeatable, and reliable VM management. Don’t settle for point-and-click—automate everything, and keep your cloud under control. If you hit a snag, check the official docs or run az vm --help for more options.

  • How to install Python pip on CentOS Core Enterprise

    Imagine this: You’ve just spun up a fresh CentOS Core Enterprise server for your next big project. You’re ready to automate, deploy, or analyze—but the moment you try pip install, you hit a wall. No pip. No Python package manager. Frustrating, right?

    CentOS Core Enterprise keeps things lean and secure, but that means pip isn’t available out of the box. If you want to install Python packages, you’ll need to unlock the right repositories first. Let’s walk through the process, step by step, and I’ll share some hard-earned tips so you don’t waste time on common pitfalls.

    Step 1: Enable EPEL Repository

    The Extra Packages for Enterprise Linux (EPEL) repository is your gateway to modern Python tools on CentOS. Without EPEL, pip is nowhere to be found.

    sudo yum install epel-release

    Tip: If you’re running on a minimal install, make sure your network is configured and yum is working. EPEL is maintained by Fedora and is safe for enterprise use.

    Step 2: Install pip for Python 2 (Legacy)

    With EPEL enabled, you can now install pip for Python 2. But let’s be real: Python 2 is obsolete. Only use this if you’re stuck maintaining legacy code.

    sudo yum install python-pip

    Gotcha: This will install pip for Python 2.x. Most modern packages require Python 3. If you’re starting fresh, skip ahead.

    Step 3: Install Python 3 and pip (Recommended)

    For new projects, Python 3 is the only sane choice. Here’s how to get both Python 3 and its pip:

    sudo yum install python3-pip
    sudo pip3 install --upgrade pip

    Pro Tip: Always upgrade pip after installing. The default version from yum is often outdated and may not support the latest Python packages.

    Final Thoughts

    CentOS Core Enterprise is rock-solid, but it makes you work for modern Python tooling. Enable EPEL, choose Python 3, and always keep pip up to date. If you run into dependency errors or missing packages, double-check your repositories and consider using virtualenv for isolated environments.
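
    If you go the isolated-environment route, here’s a minimal sketch using the built-in venv module (the ~/venvs/myproject path is just an example):

    python3 -m venv ~/venvs/myproject
    source ~/venvs/myproject/bin/activate
    pip install --upgrade pip requests flask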

    Now you’re ready to install anything from requests to flask—and get back to building something awesome.

  • How to make requests via tor in Python

    Why Route HTTP Requests Through Tor?

    Imagine you’re working on a web scraping project, and suddenly, your IP gets blocked. Or maybe you’re building a privacy-focused application where user anonymity is paramount. In both scenarios, Tor can be a game-changer. Tor (The Onion Router) is a network designed to anonymize internet traffic by routing it through multiple servers (or nodes), making it nearly impossible to trace the origin of a request.

    But here’s the catch: using Tor isn’t as simple as flipping a switch. It requires careful setup and an understanding of how to integrate it with your Python code. In this guide, I’ll walk you through two approaches to making HTTP requests via Tor: using the requests library with a SOCKS5 proxy and leveraging the stem library for more advanced control.

    🔐 Security Note: While Tor provides anonymity, it doesn’t encrypt your traffic beyond the Tor network. Always use HTTPS for secure communication.

    Setting Up Tor on Your Machine

    Before diving into the code, you need to ensure that Tor is installed and running on your machine. Here’s how you can do it:

    • Linux: Install Tor using your package manager (e.g., sudo apt install tor on Ubuntu). Start the service with sudo service tor start.
    • Mac: Use Homebrew: brew install tor, then start it with brew services start tor.
    • Windows: Download the Tor Expert Bundle from the official Tor Project website and run the Tor executable.

    By default, Tor runs a SOCKS5 proxy on 127.0.0.1:9050. We’ll use this proxy to route our HTTP requests through the Tor network.

    Method 1: Using the requests Library with a SOCKS5 Proxy

    The simplest way to route your HTTP requests through Tor is by configuring the requests library to use Tor’s SOCKS5 proxy. Here’s how:

    Step 1: Install Required Libraries

    First, ensure you have the requests library installed with SOCKS support. The quotes keep shells like zsh from interpreting the square brackets:

    pip install 'requests[socks]'

    Step 2: Create a Tor Session

    Next, create a function to configure a requests session to use the SOCKS5 proxy:

    import requests
    
    def get_tor_session():
        session = requests.session()
        session.proxies = {
            'http': 'socks5h://127.0.0.1:9050',
            'https': 'socks5h://127.0.0.1:9050'
        }
        return session
    

    Notice the use of socks5h instead of socks5. The socks5h scheme ensures that DNS resolution is performed through the Tor network, adding an extra layer of privacy.

    Step 3: Test Your Tor Session

    To verify that your requests are being routed through Tor, you can make a request to a service that returns your IP address:

    session = get_tor_session()
    response = session.get("http://httpbin.org/ip")
    print("Tor IP:", response.text)
    

    If everything is set up correctly, the IP address returned by httpbin.org should differ from your actual IP address.

    💡 Pro Tip: If you encounter issues, ensure that the Tor service is running and listening on 127.0.0.1:9050. You can check this by running netstat -an | grep 9050 (Linux/Mac) or netstat -an | findstr 9050 (Windows).

    Method 2: Using the stem Library for Advanced Control

    While the requests library with a SOCKS5 proxy is straightforward, it doesn’t give you much control over the Tor connection. For more advanced use cases, such as changing your IP address programmatically, the stem library is a better choice.

    Step 1: Install the stem Library

    Install stem using pip:

    pip install stem

    Step 2: Connect to the Tor Controller

    The Tor controller allows you to interact with the Tor process, such as requesting a new identity. Here’s how to connect to it:

    from stem.control import Controller
    
    with Controller.from_port(port=9051) as controller:
        controller.authenticate(password='your_password')  # Replace with your control port password
        print("Connected to Tor controller")
    

    By default, the Tor control port is 9051. You may need to configure a password in your torrc file to enable authentication.

    ⚠️ Gotcha: If you see an authentication error, ensure that the ControlPort and HashedControlPassword options are set in your torrc file. Restart the Tor service after making changes.
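
    For reference, the relevant torrc lines look like this (generate the hash with tor --hash-password your_password; the password is just an example):

    ControlPort 9051
    HashedControlPassword 16:YOUR_GENERATED_HASH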

    Step 3: Change Your IP Address

    To request a new IP address, send the SIGNAL NEWNYM command to the Tor controller. Note that Tor rate-limits this signal (roughly one accepted request every ten seconds), so don’t expect a fresh circuit on every call:

    from stem import Signal
    from stem.control import Controller
    
    with Controller.from_port(port=9051) as controller:
        controller.authenticate(password='your_password')
        controller.signal(Signal.NEWNYM)
        print("Requested new Tor identity")
    

    Step 4: Make a Request via Tor

    Combine the stem library with the requests library to make HTTP requests through Tor:

    import requests
    from stem import Signal
    from stem.control import Controller
    
    def get_tor_session():
        session = requests.session()
        session.proxies = {
            'http': 'socks5h://127.0.0.1:9050',
            'https': 'socks5h://127.0.0.1:9050'
        }
        return session
    
    with Controller.from_port(port=9051) as controller:
        controller.authenticate(password='your_password')
        controller.signal(Signal.NEWNYM)
    
        session = get_tor_session()
        response = session.get("http://httpbin.org/ip")
        print("New Tor IP:", response.text)
    

    Performance Considerations

    Routing requests through Tor can significantly impact performance due to the multiple hops your traffic takes. In my experience, response times can range from 500ms to several seconds, depending on the network’s current load.

    💡 Pro Tip: If performance is critical, consider using a mix of Tor and direct connections, depending on the sensitivity of the data you’re handling.
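
    A sketch of that idea, reusing the get_tor_session helper from earlier:

    import requests

    def get_session(sensitive: bool):
        # Route only privacy-sensitive requests through Tor; the rest go direct.
        return get_tor_session() if sensitive else requests.session()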

    Security Implications

    While Tor enhances anonymity, it doesn’t guarantee complete security. Here are some key points to keep in mind:

    • Always use HTTPS to encrypt your data.
    • Be cautious of exit nodes, as they can see unencrypted traffic.
    • Regularly update your Tor installation to patch security vulnerabilities.

    🔐 Security Note: Avoid using Tor for illegal activities. Law enforcement agencies can still trace activity under certain conditions.

    Conclusion

    Integrating Tor into your Python projects can unlock powerful capabilities for anonymity and bypassing restrictions. Here’s a quick recap:

    • Use the requests library with a SOCKS5 proxy for simplicity.
    • Leverage the stem library for advanced control, such as changing your IP address.
    • Always prioritize security by using HTTPS and keeping your Tor installation up to date.

    What use cases are you exploring with Tor? Share your thoughts in the comments below!

  • How to get HTML code from the console of a website

    The Power of the Browser Console

    Imagine this: you’re debugging a website late at night, and something isn’t rendering correctly. The CSS looks fine, the JavaScript isn’t throwing errors, but the page still isn’t behaving as expected. You suspect the issue lies in the generated HTML structure, but how do you quickly inspect or copy the entire HTML of the page? The answer lies in a tool that’s already at your fingertips: the browser console. Whether you’re a developer troubleshooting a bug, a designer analyzing a competitor’s layout, or a curious learner diving into web development, knowing how to extract a webpage’s HTML directly from the browser console is an essential skill.

    In this article, we’ll go beyond the basics of using document.documentElement.outerHTML. We’ll explore practical use cases, show you how to handle large HTML outputs, discuss security implications, and even touch on automating this process with scripts. By the end, you’ll not only know how to grab HTML from the console but also how to use this knowledge effectively and responsibly.

    Understanding document.documentElement.outerHTML

    The document.documentElement.outerHTML property is a JavaScript method that returns the entire HTML structure of the current webpage as a string. This includes everything from the opening <html> tag to the closing </html> tag. It’s a quick and straightforward way to access the full DOM (Document Object Model) representation of a page.

    Here’s a simple example:

    // Retrieve the entire HTML of the current page
    const html = document.documentElement.outerHTML;
    console.log(html);
    

    When you run this in your browser’s console, it will output the full HTML of the page. But before we dive into the “how,” let’s address an important topic: security.

    🔐 Security Note: Be cautious when running code in the browser console, especially on untrusted websites. Malicious scripts can exploit the console to trick users into executing harmful commands. Always verify the code you’re running and avoid pasting unknown scripts into the console.

    Step-by-Step Guide to Extracting HTML

    Let’s walk through the process of extracting HTML from a webpage using the browser console. We’ll include tips and tricks to make the process smoother.

    1. Open the Browser Console

    The first step is to access the browser’s developer tools. Here’s how to do it in popular browsers:

    • Chrome: Press F12 or Ctrl+Shift+I (Windows/Linux) or Cmd+Option+I (Mac).
    • Firefox: Press F12 or Ctrl+Shift+K (Windows/Linux) or Cmd+Option+K (Mac).
    • Edge: Press F12 or Ctrl+Shift+I (Windows/Linux) or Cmd+Option+I (Mac).
    • Safari: Enable “Develop” mode in Preferences, then press Cmd+Option+C.

    2. Run the Command

    Once the console is open, type the following command and press Enter:

    document.documentElement.outerHTML

    The console will display the entire HTML of the page. You can scroll through it, copy it, or save it for later use.

    💡 Pro Tip: If the output is too long and gets truncated, use console.log(document.documentElement.outerHTML) instead. This ensures the full HTML is displayed in a scrollable format.

    3. Copy the HTML

    To copy the HTML, right-click on the output in the console and select “Copy” or use the keyboard shortcut Ctrl+C (Windows/Linux) or Cmd+C (Mac). Paste it into a text editor for further analysis or modification.
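
    💡 Pro Tip: Chrome, Edge, and Firefox also expose a copy() helper in the console, so copy(document.documentElement.outerHTML) puts the page’s full HTML on your clipboard in one step.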

    Handling Large HTML Outputs

    For complex websites with large DOM structures, the HTML output can be overwhelming. Here are some strategies to manage it:

    1. Save to a File

    Instead of copying the HTML manually, you can save it directly to a file using the following code:

    // Create a Blob and download the HTML as a file
    const html = document.documentElement.outerHTML;
    const blob = new Blob([html], { type: 'text/html' });
    const url = URL.createObjectURL(blob);
    
    const a = document.createElement('a');
    a.href = url;
    a.download = 'page.html';
    a.click();
    
    URL.revokeObjectURL(url);
    

    This script creates a downloadable file named page.html containing the full HTML of the page. It’s especially useful for archiving or sharing.

    2. Extract Specific Elements

    If you’re only interested in a specific part of the page, such as the <body> or a particular div, you can target it directly:

    // Get the HTML of the <body> tag
    const bodyHtml = document.body.outerHTML;
    console.log(bodyHtml);
    
    // Get the HTML of a specific element by ID
    const elementHtml = document.getElementById('myElement').outerHTML;
    console.log(elementHtml);
    

    💡 Pro Tip: Use browser extensions like “SelectorGadget” to quickly find the CSS selectors for specific elements on a page.

    Automating HTML Extraction

    If you need to extract HTML from multiple pages, consider automating the process with a headless browser like Puppeteer. Here’s an example:

    // Puppeteer script to extract HTML from a webpage
    const puppeteer = require('puppeteer');
    
    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.goto('https://example.com');
    
      const html = await page.evaluate(() => document.documentElement.outerHTML);
      console.log(html);
    
      await browser.close();
    })();
    

    This script launches a headless browser, navigates to a specified URL, and logs the full HTML of the page. It’s a powerful tool for web scraping and automation.

    Security and Ethical Considerations

    While extracting HTML is a legitimate technique, it’s important to use it responsibly. Here are some guidelines:

    • Respect copyright and intellectual property laws. Don’t use extracted HTML to replicate or steal content.
    • Follow website terms of service. Some sites explicitly prohibit scraping or automated data extraction.
    • Avoid running untrusted scripts in the console. Always verify the source of the code.

    ⚠️ Gotcha: Some websites use obfuscation or dynamically generate HTML with JavaScript, making it harder to extract meaningful content. In such cases, tools like Puppeteer or browser extensions may be more effective.

    Conclusion

    Extracting HTML from a webpage using the browser console is a simple yet powerful technique that every developer should know. Here’s a quick recap:

    • Use document.documentElement.outerHTML to retrieve the full HTML of a page.
    • Handle large outputs with console.log or save the HTML to a file.
    • Target specific elements to extract only the content you need.
    • Automate the process with tools like Puppeteer for efficiency.
    • Always consider security and ethical implications when extracting HTML.

    Now it’s your turn: What creative uses can you think of for this technique? Share your thoughts and experiences in the comments below!

  • Set up the latest Elasticsearch and Kibana on CentOS 7 (April 2022)

    Imagine this: your boss walks in and says, “We need real-time search and analytics. Yesterday.” You’ve got a CentOS 7 box, and you need Elasticsearch and Kibana running—fast, stable, and secure. Sound familiar? Good. Let’s get straight to business.

    Step 1: Prerequisites—Don’t Skip These!

    Before you touch Elasticsearch, make sure your server is ready. These steps aren’t optional; skipping them will cost you hours later.

    • Set a static IP:

      sudo vi /etc/sysconfig/network-scripts/ifcfg-ens3

      Tip: Double-check your network config. A changing IP will break your cluster.

    • Set a hostname:

      sudo vi /etc/hostname

      Opinion: Use meaningful hostnames. “node1” is better than “localhost”.

    • (Optional) Disable the firewall:

      sudo systemctl disable firewalld --now

      Gotcha: Only do this in a trusted environment. Otherwise, configure your firewall properly.

    • Install Java (Elasticsearch needs it):

      sudo yum install java-1.8.0-openjdk.x86_64 -y

      Tip: Elasticsearch 8.x bundles its own JVM, but installing Java never hurts for troubleshooting.

    Step 2: Install Elasticsearch 8.x

    Ready for the main event? Let’s get Elasticsearch installed and configured.

    1. Import the Elasticsearch GPG key:

      sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
    2. Add the Elasticsearch repo:

      sudo vi /etc/yum.repos.d/elasticsearch.repo
      [elasticsearch]
      name=Elasticsearch repository for 8.x packages
      baseurl=https://artifacts.elastic.co/packages/8.x/yum
      gpgcheck=1
      gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
      enabled=0
      autorefresh=1
      type=rpm-md

      Tip: Set enabled=0 so you only use this repo when you want to. Avoid accidental upgrades.

    3. Install Elasticsearch:

      sudo yum install --enablerepo=elasticsearch elasticsearch -y
    4. Configure Elasticsearch:

      sudo vi /etc/elasticsearch/elasticsearch.yml
      node.name: "es1"
      cluster.name: cluster1
      script.allowed_types: none

      Opinion: Always set node.name and cluster.name. Defaults are for amateurs.

    5. Set JVM heap size (optional, but recommended for tuning):

      sudo vi /etc/elasticsearch/jvm.options
      -Xms4g
      -Xmx4g

      Tip: Set heap to half your available RAM, max 32GB. Too much heap = slow GC. On 8.x, prefer dropping a small override file in /etc/elasticsearch/jvm.options.d/ instead of editing jvm.options itself.

    6. Enable and start Elasticsearch:

      sudo systemctl enable elasticsearch.service
      sudo systemctl start elasticsearch.service
    7. Test your installation:

      curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200

      Gotcha: Elasticsearch 8.x ships with TLS and authentication enabled, so plain http://localhost:9200 will be rejected. Use the elastic password printed during installation (or reset it with /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic). If the request still fails, check SELinux or your firewall.

    Step 3: Install and Configure Kibana

    Kibana is your window into Elasticsearch. Let’s get it running.

    1. Add the Kibana repo:

      sudo vi /etc/yum.repos.d/kibana.repo
      [kibana-8.x]
      name=Kibana repository for 8.x packages
      baseurl=https://artifacts.elastic.co/packages/8.x/yum
      gpgcheck=1
      gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
      enabled=1
      autorefresh=1
      type=rpm-md

      Tip: Keep enabled=1 for Kibana. You’ll want updates.

    2. Install Kibana:

      sudo yum install kibana -y
    3. Generate the enrollment token (for secure setup):

      sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana

      Gotcha: Save this token! You’ll need it when you first access Kibana.

    4. Reload systemd and start Kibana:

      sudo systemctl daemon-reload
      sudo systemctl enable kibana.service
      sudo systemctl restart kibana.service

      Tip: Use restart instead of start to pick up config changes.
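
      Tip: Kibana listens on port 5601 by default. Browse to http://<your-server>:5601 and paste the enrollment token when prompted (recent versions may also ask for a verification code, which /usr/share/kibana/bin/kibana-verification-code prints).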

    Final Thoughts: Don’t Get Burned

    • Security: Elasticsearch 8.x is secure by default. Don’t disable TLS unless you know exactly what you’re doing.
    • Memory: Monitor your heap usage. Elasticsearch loves RAM, but hates swap.
    • Upgrades: Always test upgrades in a staging environment. Elasticsearch upgrades can be breaking.

    If you followed these steps, you’re ready to build powerful search and analytics solutions. Don’t settle for defaults—tune, secure, and monitor your stack. Any questions? I’m Max L, and I don’t believe in half-measures.

  • Python: Azure Service Bus Without SDK (REST API Guide)

    Want to send and receive notifications on Azure Service Bus using Python, but don’t want to rely on the official SDK? This guide shows you how to authenticate and interact with Azure Service Bus queues directly using HTTP requests and SAS tokens. Let’s dive in!

    Azure Service Bus (ASB) uses Azure Active Directory (AAD) or Shared Access Signature (SAS) tokens for authentication. In this example, we assume you have owner access and can generate a Send/Listen SAS key from the Azure Portal. Here’s how to create a valid SAS token:

    def get_auth_token(sb_name, eh_name, sas_name, sas_value):
        # generate a SAS token: HMAC-SHA256 over "<url-encoded uri>\n<expiry>"
        uri = "https://{}.servicebus.windows.net/{}".format(sb_name, eh_name)
        encoded_uri = urllib.parse.quote_plus(uri)
        sas = sas_value.encode('utf-8')
        expiry = str(int(time.time() + 10000))
        string_to_sign = (encoded_uri + '\n' + expiry).encode('utf-8')
        signed_hmac_sha256 = hmac.HMAC(sas, string_to_sign, hashlib.sha256)
        signature = urllib.parse.quote(base64.b64encode(signed_hmac_sha256.digest()))
        return {"uri": uri,
                "token": 'SharedAccessSignature sr={}&sig={}&se={}&skn={}'
                         .format(encoded_uri, signature, expiry, sas_name)
               }

    Once you have generated the token, sending and receiving messages is straightforward. Below is a complete code snippet that generates a SAS token and sends your machine’s IP address via Azure Service Bus.

    import time
    import urllib.parse
    import hmac
    import hashlib
    import base64
    import requests
    import socket
    
    h_name = socket.gethostname()
    IP_address = socket.gethostbyname(h_name)
    
    def get_auth_token(sb_name, eh_name, sas_name, sas_value):
        # generate a SAS token: HMAC-SHA256 over "<url-encoded uri>\n<expiry>"
        uri = "https://{}.servicebus.windows.net/{}".format(sb_name, eh_name)
        encoded_uri = urllib.parse.quote_plus(uri)
        sas = sas_value.encode('utf-8')
        expiry = str(int(time.time() + 10000))
        string_to_sign = (encoded_uri + '\n' + expiry).encode('utf-8')
        signed_hmac_sha256 = hmac.HMAC(sas, string_to_sign, hashlib.sha256)
        signature = urllib.parse.quote(base64.b64encode(signed_hmac_sha256.digest()))
        return {"uri": uri,
                "token": 'SharedAccessSignature sr={}&sig={}&se={}&skn={}'
                         .format(encoded_uri, signature, expiry, sas_name)
               }
    
    def send_message(token, message):
        # POST http{s}://{serviceNamespace}.servicebus.windows.net/{queuePath}/messages
        r = requests.post(token['uri'] + "/messages",
            headers={
                "Authorization": token['token'],
                "Content-Type": "application/json"
            },
            json=message)
        return r.status_code  # 201 Created on success
    
    def receive_message(token):
        # DELETE http{s}://{serviceNamespace}.servicebus.windows.net/{queuePath}/messages/head
        # destructive read; returns 204 with an empty body if the queue is empty
        r = requests.delete(token['uri'] + "/messages/head",
            headers={
                "Authorization": token['token'],
            })
        return r.text
    
    sb_name = "<service bus name>"
    q_name = "<service bus queue name>"
    
    skn = "<key name for that access key>"
    key = "<access key created in portal>"
    
    token = get_auth_token(sb_name, q_name, skn, key)
    print(token['token'])
    
    send_message(token, {'ip': IP_address})
    print(receive_message(token))
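
    One caveat: the DELETE above is a destructive read, so a crash mid-processing loses the message. Service Bus also supports a peek-lock receive over REST (POST /messages/head locks the message; a second call completes it). A minimal sketch reusing the token from above:

    import json
    
    def receive_message_peek_lock(token):
        # POST /messages/head locks the next message without removing it
        # (201 = message locked, 204 = queue empty)
        r = requests.post(token['uri'] + "/messages/head?timeout=60",
                          headers={"Authorization": token['token']})
        if r.status_code != 201:
            return None
        props = json.loads(r.headers['BrokerProperties'])  # contains MessageId and LockToken
        body = r.text
        # ... process the message, then complete it so it is not redelivered
        requests.delete("{}/messages/{}/{}".format(token['uri'], props['MessageId'], props['LockToken']),
                        headers={"Authorization": token['token']})
        return body
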
  • Simple Tips to improve C# ConcurrentDictionary performance

    Looking to boost the performance of your C# ConcurrentDictionary? Here are practical tips that can help you write more efficient, scalable, and maintainable concurrent code. Discover common pitfalls and best practices to get the most out of your dictionaries in multi-threaded environments.

    Prefer Dictionary<>

    The ConcurrentDictionary class consumes more memory than the Dictionary class due to its support for thread-safe operations. While ConcurrentDictionary is essential for scenarios where multiple threads access the dictionary simultaneously, it’s best to limit its usage to avoid excessive memory consumption. If your application does not require thread safety, opt for Dictionary instead—it’s more memory-efficient and generally faster for single-threaded scenarios.

    Use GetOrAdd

    Minimize unnecessary dictionary operations. For instance, when adding items, TryAdd simply returns false if the key already exists instead of throwing, so you don’t need a separate existence check first. To fetch-or-add in a single atomic call, use GetOrAdd, and for removals, TryRemove likewise avoids pre-checking for item existence.

    // Anti-pattern: check-then-add is two separate operations, so two
    // threads can race here and each construct a PrivateClass
    if (!_concurrentDictionary.TryGetValue(cachedInstanceId, out _privateClass))
    {
        _privateClass = new PrivateClass();
        _concurrentDictionary.TryAdd(cachedInstanceId, _privateClass);
    }
    

    The code above misses the advantages of ConcurrentDictionary. The recommended approach is:

    _privateClass = _concurrentDictionary.GetOrAdd(cachedInstanceId, _ => new PrivateClass());
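
    One caveat: the valueFactory you pass to GetOrAdd runs outside the dictionary’s internal locks, so two racing threads may each construct a PrivateClass (only one result wins). If construction is expensive, a common sketch is to store Lazy<T> values instead:

    var lazyDictionary = new ConcurrentDictionary<int, Lazy<PrivateClass>>();
    var instance = lazyDictionary
        .GetOrAdd(cachedInstanceId, _ => new Lazy<PrivateClass>(() => new PrivateClass()))
        .Value; // Lazy<T> runs the constructor at most once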
    

    Set ConcurrencyLevel

    By default, ConcurrentDictionary on .NET Framework uses a concurrency level of four times the number of CPU cores (newer .NET versions default to the processor count), which may be more lock granularity than you need, especially in cloud environments with variable core counts. Consider specifying a lower concurrency level to optimize resource usage.

    // Create a concurrent dictionary with a concurrency level of 2;
    // this constructor overload also requires an initial capacity
    var dictionary = new ConcurrentDictionary<string, int>(concurrencyLevel: 2, capacity: 31);
    

    Keys and Values are Expensive

    Accessing ConcurrentDictionary.Keys and .Values is costly because these operations acquire locks and construct new list objects. Instead, enumerate KeyValuePair entries directly for better performance.

    // Create a concurrent dictionary with some initial data
    var dictionary = new ConcurrentDictionary<string, int>();
    dictionary["key1"] = 1;
    dictionary["key2"] = 2;
    dictionary["key3"] = 3;
    
    // Enumerating the dictionary itself is lock-free (a moment-in-time view),
    // unlike Keys/Values, which take every internal lock and copy to a new list.
    // Requires: using System.Linq;
    string[] keys = dictionary.Select(pair => pair.Key).ToArray();
    int[] values = dictionary.Select(pair => pair.Value).ToArray();
    

    Use ContainsKey Before Lock Operations

    if (this._concurrentDictionary.TryRemove(itemKey, out value))
    {
        // some operations
    }
    

    When most removal attempts are expected to miss, adding a lock-free ContainsKey check before the removal can improve performance, because TryRemove acquires a write lock even when the key is absent:

    if (this._concurrentDictionary.ContainsKey(itemKey))
    {
        if (this._concurrentDictionary.TryRemove(itemKey, out value))
        {
            // some operations
        }
    }
    

    Avoid ConcurrentDictionary.Count

    The Count property in ConcurrentDictionary is expensive: it acquires all of the dictionary’s internal locks just to produce a number. For a lock-free count, wrap your dictionary and maintain the tally yourself with Interlocked.Increment for atomic updates. This is ideal for tracking items or connections in a thread-safe manner.

    public class Counter
    {
        private int count = 0;
    
        public void Increment()
        {
            // Increment the count using the Interlocked.Increment method
            Interlocked.Increment(ref this.count);
            // or
            // Interlocked.Decrement(ref this.count);
        }
    
        public int GetCount()
        {
            // Volatile.Read guarantees we observe the most recently published value
            return Volatile.Read(ref this.count);
        }
    }