Automatic installation of Airtime, the popular online radio automation system, is stuck at Ubuntu 12.04. Installation on Ubuntu 14.04 or Debian 7, however, is straightforward and easy. Sourcefabric fixed some issues with language settings and other quirks that required workarounds in previous versions of the software, but the installation script still has some errors. Here is the easy way to install Airtime; I used this method on an Ubuntu 14.04 server.
First of all, install git so you can clone a repo:
apt-get install git
There is an error in Sourcefabric's installation script regarding the database: you have to install PostgreSQL manually before running the install script. If you ran the installer first, don't worry; just install PostgreSQL and rerun the Airtime installation:
apt-get install postgresql postgresql-contrib
Now, you are ready to install Airtime:
sudo ~/airtime-x.y.z/install -fiap
or go to the Airtime source folder and run the installer there:
sudo ./install -fiap
Open your browser to your IP or domain (no special ports needed) and follow the instructions. The Airtime installer walks you through all the details; it's pretty straightforward! You will probably see an error at the end stating that some services are not running. Just restart those services with the following commands:
service airtime-media-monitor start
service airtime-liquidsoap restart
service airtime-playout restart
The default login and password for your server aren't documented anywhere. Enter "admin" in both fields, and once logged in, don't forget to change them 😛
$ # discover supported formats
$ youtube-dl -F 'some youtube url'
$ # download one format and specify a file name
$ youtube-dl -f 18 -o 'blabla.mp4' 'some youtube url'
$ # download a playlist
$ youtube-dl -f 18 'some youtube url'
youtube-dl has been pretty bulletproof for me. I’m still an iPod Classic man, and nearly every week I dump some URLs in a script and have it download them for syncing.
It handles playlists quite well – just give it the playlist URL and it’ll suck down all videos in the playlist. Unfortunately, the sequence isn’t preserved (e.g., they’ll all just be named the video name, not 1, 2, 3, etc.)
Your distro’s version of youtube-dl is hopelessly out of date and effectively worthless. The tool is sensitive to HTML changes, so if a download fails, your first step is to either run youtube-dl -U (update) or manually grab the latest release from GitHub.
There isn’t a magic “format = mp3” flag, but it’s trivial to download in one of the audio-only formats and run it through a converter. See below
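A sketch of that audio workflow (format code 140 is typically YouTube's m4a audio stream, but check the `youtube-dl -F` output first; ffmpeg must be installed):

```shell
# download an audio-only format (140 is usually m4a audio)
youtube-dl -f 140 -o audio.m4a 'some youtube url'
# convert to mp3 with ffmpeg (-q:a 2 is a good VBR quality level)
ffmpeg -i audio.m4a -codec:a libmp3lame -q:a 2 audio.mp3
```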
WholesaleInternet, Nocix, and similar providers do not offer KVM access, so you cannot install Windows Server on their standalone machines the usual way.
Test environment: preconfigured Intel Core 2 Duo / Atom 330, 2 GB RAM, 500 GB disk, RTL8102E NIC
Integrated Drivers: Realtek 7.092.0115.2015 WHQL (RTL81XX)
Please be sure to use SecureCRT for this procedure; use Xshell or PuTTY at your own risk ~
The third step is to restore the system image to the hard disk, which takes about an hour (patience needed ~):
dd if=WSI2008 of=/dev/sda
If the output looks like the screenshot, the system image was written successfully ~
The fourth step: go to the WSI control panel, restart the machine, wait about five minutes, and select sysrcd 4.3.1 to rebuild the system.
The fifth step: after it boots, log in to sysrcd through the console using the recovery CD password.
Make sure SecureCRT's "Options – Session Options – Appearance – Character Encoding" is set to UTF-8; if not, change it, then disconnect and reconnect (top priority).
The sixth step: execute the following commands to install the NTFS partition tool:
tar zxf ntfs-3g_ntfsprogs-2017.3.23.tgz
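Assuming the tarball uses the standard autotools layout (ntfs-3g_ntfsprogs releases do), the build and install steps after extraction would be roughly:

```shell
cd ntfs-3g_ntfsprogs-2017.3.23
./configure          # generate Makefiles for this system
make                 # build ntfs-3g and the ntfsprogs utilities
make install         # install (run as root inside sysrcd)
mkdir -p /mnt/123    # create the mount point used in the next step
```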
The seventh step: temporarily mount the Windows partition:
mount -t ntfs-3g /dev/sda2 /mnt/123
(Note: pick the partition according to the fdisk output; it may be sda2 or sda1.)
The eighth step: change into the Windows Startup folder:
cd "Users/Administrator/AppData/Roaming/Microsoft/Windows/Start Menu/Programs/Startup"
rm -rf *.bat desktop.ini
Create a new batch file, v1.bat, paste in the contents below (modified for your network), save it, and run reboot to restart the system:
ping -n 300 127.0.0.1 >nul
netsh interface ip set address "本地连接 3" static <server IP> 255.255.255.252 <gateway>
(The ping line is just a ~300-second delay; replace the placeholders with the IP address and gateway assigned to your server.)
The ninth step: wait about 15 minutes while the drivers install, then connect via remote desktop, delete the startup item batch file, and you're done ~ (password: 3.14159265351ztr)
This is my first post here about OpenVZ, and it's a tutorial on how to run LXC containers inside a low-budget OpenVZ VPS. It's a fun thing to toy with and sometimes useful, but due to some OpenVZ limitations, a how-to doesn't seem to be readily available on the internet. In this tutorial, I'll show how to run an Alpine Linux container inside an OpenVZ VPS.
Why not Docker? Although OpenVZ supports running Docker inside a container (CT), it requires the veth and bridge kernel modules, which most VPS providers do not make available. Besides, Docker is overhyped and consumes too many resources.
Be aware that some providers do not allow "nested virtualization." Whether running LXC violates the AUP depends entirely on the definition of virtualization: running LXC containers incurs very little overhead, and it's one thing to run LXC and another thing entirely to run QEMU.
The following example assumes the distribution on your OpenVZ VPS is Arch Linux. Although most (if not all) OpenVZ providers don't offer this option, it takes only a few commands and a few minutes to convert any VPS into Arch.
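Once LXC itself is installed and configured, creating the Alpine container would look roughly like this (a sketch using the standard `download` template; the container name and release are arbitrary choices):

```shell
# create an Alpine container from the download template
lxc-create -n alpine -t download -- --dist alpine --release edge --arch amd64
lxc-start -n alpine        # start the container
lxc-attach -n alpine       # get a shell inside it
```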
Pilar is a software engineer at Crowdbotics specializing in chatbots, test automation, and frontend development.
So, you’re here because you want to create your first chatbot. Or maybe you’ve already built one, and you want to know how you can improve your chatbot’s responses with NLP.
Today, I’m going to show you exactly how to do that! We’ll build a chatbot with rich NLP features using Dialogflow, Wit.ai, and Node.js. The best part? It’ll take us just one day.
Let’s review. What is a chatbot?
Chatbots are computer programs that mimic conversation with people through the use of artificial intelligence.
In a nutshell, when someone types in a message, the bot responds with the right reply.
Out of the thousands of chatbots that exist today, most are messenger apps designed to communicate with customers. They work with Natural Language Processing (NLP) systems, which translate everyday language into a form that the bot can understand and learn new interactions from.
Through the help of that technology, bots now hold endless possibilities. You can use them to read everyday news, get the weather, transfer money, buy your favorite items, schedule a meeting, or even get service assistance. And all from the convenience of your favorite messenger app.
People are now spending more mobile screen time on messaging than on social media. A lot of companies have noticed the trend and are taking advantage of chatbots as a new channel to talk to us. Turns out that a whopping 60% of online adults in the US use online messaging, voice or video chat services.
We’re choosing between two popular platforms when building our chatbot: Dialogflow and wit.ai.
Dialogflow (once known as Api.ai) is a service owned by Google that allows developers to build speech-to-text, natural language processing, and artificially intelligent systems that you can train with your own custom functionality. This incredible tool uses machine learning to understand what users are saying, and it makes it remarkably simple to set up nonlinear bots quickly.
It’s Useful: It integrates NLP without much hassle.
It’s Easy: Dialogflow contains lots of pre-built agents that are a breeze to activate.
It’s Integrated: You can connect your favorite platforms such as Facebook, Twitter, Slack, Telegram, etc…
It’s Multilingual: It recognizes more than 15 languages.
It’s Cheap: You’ll be glad to know it’s totally free.
It’s Not So Customizable: If you want to create a highly customized bot, you will need to implement a code flow, and examples of this are not shown anywhere in its documentation.
It’s Not So Implementable: It can be tricky to figure out how to perform platform integrations, as they are not well documented.
Wit.ai (owned by Facebook) works similarly to Dialogflow: it also processes human speech patterns and filters useful data like intent and context from it. Like Dialogflow, it provides a UI to help developers with creating intents, entities and agents.
What Are Its Pros?
It’s Also Useful: Wit.ai also integrates NLP.
It’s Also Easy: The Quickstart tutorial is a practical way to get started.
It’s Also Integrated: You’ll also be able to integrate with several platforms, like Facebook, Twitter, Slack, Telegram, etc…
It’s Also Cheap: Totally free!
It’s Adaptable: You’ll be able to build your bot with Node.js, Python or Ruby.
It’s Pragmatic: Enjoy an easy to read “Recipes” section for common problems and how you can solve them.
What Are Its Cons?
It’s Not-So Fast: Since the learning curve is steep, you’ll need to invest time into figuring out how to implement it.
It’s Not-So Visual: Given that there’s no visual development environment, you’ll need to be comfortable with code.
So which did we pick?
Given that Dialogflow provides junior developers with the best documentation and a great user experience for developing a bot without being an expert in the field, we’ve decided to give it a shot for this tutorial.
A step-by-step guide to building the chatbot
In this tutorial, we’ll be using Node.js to build a simple bot, so please make sure it’s installed on your computer.
So let’s get started!
Step 1: Setting up your development environment
Let’s create a simple webserver with one webhook endpoint. I’ll use Express.js.
1. Writing webhook server with Express
First of all, we need to know that a webhook (also called a web callback or HTTP push API) is a way for an app to provide other applications with real-time information. It delivers data to other applications as it happens, meaning you get data immediately, unlike typical APIs where you would need to poll for data very frequently in order to get it in real-time.
In order to get started, you will have to create a new directory, where you’ll store your entire project. We are going to name it “Bot Tutorial”.
Once you have created it, go to your terminal, enter the directory, and initialize your Node.js app with:
npm init
After filling out all the needed info (if you don’t know how to fill package.json, take a look here), your next step will be to install Express to set up a server, plus one middleware for it, called body-parser, to parse incoming request bodies. So, in your terminal type:
npm install express body-parser --save
Once the installation is complete, go to your directory, create a file called index.js, and start an Express server listening on port 3000 (you can choose any port you want).
app.listen(3000, () => console.log('Webhook server is listening, port 3000'));
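Pieced together, a minimal index.js at this stage might look like the following sketch (only the Express and body-parser setup; the route handlers come in the next steps):

```javascript
// index.js — minimal webhook server skeleton
const express = require('express');
const bodyParser = require('body-parser');

const app = express();
app.use(bodyParser.json()); // parse incoming JSON request bodies

app.listen(3000, () => console.log('Webhook server is listening, port 3000'));
```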
Save it, and let's check that it's working. Run this command in your terminal:
node index.js
If everything is working as it should, you will see the following message in your terminal:
Great, our server is listening! Now we are going to create two endpoints:
One will be for Facebook's initial verification. When you connect a Facebook page to the webhook server, you also need a verify token: Facebook makes a request and matches this token against the webhook's response to be sure you're not connecting your page to a random webhook server.
The second one will be responsible for all other messages from Facebook Messenger.
But before we go any further, we will organize our code in two separate folders: controllers and helpers. For each separate endpoint we’ll create a function in a separate file in the controllers folder.
The first verification endpoint goes to controllers/verification.js.
Here you can see a string called "crowdbotics". You can change it to any word or phrase you prefer. Make a note of it, as you will need it later when setting up your Facebook app.
The second endpoint for handling all the Facebook bot messages will go to controllers/messageWebhook.js.
Now, if you left your console running on port 3000, stop it and run this command there again:
node index.js
2. Setting up a proxy server with ngrok
Why do we need this step? Well, our local Express server URL is not reachable by everyone on the Internet, and it doesn't support the HTTPS protocol, which is required for Facebook Messenger bots. We will therefore set up an ngrok server as a proxy.
Ngrok is a multiplatform tunnelling, reverse proxy software that establishes secure tunnels from a public endpoint such as the Internet to a locally running network service while capturing all traffic for detailed inspection and replay.
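Assuming you've downloaded the ngrok binary for your platform from ngrok.com, exposing the local server is a single command:

```shell
# tunnel local port 3000 to a public HTTPS URL
./ngrok http 3000
# ngrok prints a forwarding URL like https://<random-id>.ngrok.io —
# use that as the webhook callback URL in the Facebook app settings
```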
After that you’ll need to create an app. Go to developers.facebook.com/quickstarts, give your Facebook app a name, type in your e-mail, and then click the “Create App ID” button.
After creating the App, you have to select a product. Click the“Messenger” icon and then click on the “Set Up” button. This will redirect you to the Messenger Platform.
Once you’re there, you must locate the “Token Generation” section. Select the page you already created, and it will give you a Page Access Token that we will use later.
Below this section is the Webhooks section. Click on “Setup Webhooks” and it will show you a popup window, where you’ll need to fill out the following:
Callback URL: With your ngrok URL.
Verify Token: The string for validation that you already chose from controller/verification.js.
Subscription Fields: Choose messages and messaging_postbacks. If you want to know more about webhook events read this information.
Click “Verify and Save” button.
Note: If the callback URL returns a "502 Bad Gateway" error, it's because you aren't running your local server and ngrok at the same time.
At this point, our Facebook application is connected and working correctly, but we aren't quite finished yet.
Step 2: Dialogflow integration
To get started we’ll head to the Dialogflow website and click the “Sign up for free” button. We’re then taken to a registration page, where you can log in with your Google Account.
Once you're in, you need to click "Allow" to grant Dialogflow access to your Google account on the screen that follows, and accept the terms of service. And done! Now you're in the Dialogflow interface!
It's important that you watch the "Get Started" video, where the Dialogflow team explains in general terms how the platform works.
After this point, you can start prepping your virtual AI assistant. An agent is an assistant that you create and teach specific skills to. To begin, click on the "Create Agent" button. You may need to authorize Dialogflow again so it has additional permissions for your Google account. This is normal, so click "Authorize".
On the next screen, we have our agent’s details:
Agent Name: This is for your own reference so that you can differentiate agents on your interface. You can choose any name you want.
Description: A readable description, so you can remember what the agent does. This is optional.
Language: The language the agent works in. Choose carefully, because this cannot be changed later. For this tutorial we're going to work with English.
Time zone: The time zone you want your agent to be in.
Then just click the “Save” button.
Step 3: Integrating Small Talk Agent To Your Bot
After you save the main settings, Dialogflow will redirect you to your bot's main page. Here, go to the section called "Small Talk".
Click on the "Enable" button. Then you can try your bot in the console on the right side. Type whatever you want, like "Hello, how are you?", "Who are you?", etc., and you will see a response.
It works! You're talking to a bot! The next step is to embed this conversation into our code base.
Before that, you have to install two different packages on your terminal:
First, we have to install the request node package to be able to send requests to Facebook:
npm install --save request
Second, we have to install the Dialogflow Node.js package; for now it is still published under its old name, apiai:
npm install --save apiai
Once you are at this stage, implement the processMessage function. To do that, create a file at helpers directory and name it processMessage.js.
First, we have to initialize the apiai client with the API key. To retrieve it, click the configuration (gear) icon in the left menu and copy the "Client Access Token".
Now we have to implement processMessage function. Add your Client Access Token in const API_AI_TOKEN and your Facebook Page Access Token in const FACEBOOK_ACCESS_TOKEN.
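Since the file's contents aren't reproduced in this post, here is a sketch of helpers/processMessage.js written against the apiai package's documented API and the Messenger Send API (the two token constants and the sessionId are placeholders you fill in as described above):

```javascript
// helpers/processMessage.js — bridges Messenger and Dialogflow (apiai)
const apiai = require('apiai');
const request = require('request');

const API_AI_TOKEN = 'your-dialogflow-client-access-token'; // placeholder
const FACEBOOK_ACCESS_TOKEN = 'your-page-access-token';     // placeholder

const apiaiClient = apiai(API_AI_TOKEN);

// send a text reply back to the user through the Messenger Send API
function sendTextMessage(senderId, text) {
  request({
    url: 'https://graph.facebook.com/v2.6/me/messages',
    qs: { access_token: FACEBOOK_ACCESS_TOKEN },
    method: 'POST',
    json: { recipient: { id: senderId }, message: { text } },
  });
}

module.exports = (event) => {
  const senderId = event.sender.id;
  const message = event.message.text;

  // forward the user's text to Dialogflow for NLP processing
  const session = apiaiClient.textRequest(message, { sessionId: 'bot-session' });

  session.on('response', (response) => {
    // Dialogflow's "speech" fulfillment goes back to Messenger
    sendTextMessage(senderId, response.result.fulfillment.speech);
  });

  session.on('error', (error) => console.log(error));
  session.end();
};
```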
If you look at the code: your webhook server receives a message from a user via Facebook Messenger and passes the text content to Dialogflow; once Dialogflow responds, the response event is triggered and the result is sent back to Facebook Messenger.
Before testing our bot, it is very important to know that while your app is in Development Mode, plugin and API functionality will only work for admins, developers, and testers of the app. After your app is approved and made public, it will work for the general public.
To add a tester, go back to your Facebook app, find the "Roles" section, and click on it. Then, in the "Testers" section, click the "Add Testers" button; a popup window will appear where you can add the Facebook ID or username of the person you want to test your bot. Finally, click the "Submit" button, and that person will be able to message your bot through your Facebook page.
Now, all that’s left is to test our bot. If your server was running, turn it off and run index.js again. If all is well with your bot, you should get replies from the bot for some simple questions like “How are you?”, “Who are you?”, etc…
But guess what — this is just the beginning. There are so many other things that you can do with your bot.
As we mentioned before, an agent (your bot) is essentially the container or project and it contains intents, entities, and the responses that you want to deliver to your user.
Intents are the mechanisms that pick up what your user is requesting (using entities) and direct the agent to respond accordingly.
Step 4: Creating an Intent
In order to create an intent, log into the agent you’d like to add the new functionality to. Go to the Dialogflow console, locate the section of intents, and click on the “+” button.
The sample intent for this agent will teach it to answer simple questions; in this case, we're going to ask it about Crowdbotics.
To start, give your intent a name. It's recommended that you choose a name describing what you want the intent to do. In this case we'll name it "crowdbotics-objective".
Then in the “User says” section, add your first trigger sentence. As the name suggests, this is going to be how a user can ask your bot about something and why you need to add several examples for the bot to learn from.
To add a sentence, just type it and then hit "Enter".
Now there’s a range of sentences the agent should understand, but you haven’t told it what action is expected when it hears them. To do so, create an “action”. It will need to be all lowercase with no spaces.
Now that the agent knows what users expect from it, you need to add some responses. Go to the "Response" section, where you can add different responses to your users' questions.
Finally click the “Save” button next to your intent name to save the entire progress.
So now, one thing is missing — you have to try out your agent!
You can test your new intent by typing a test statement into the console on the right. Type a question similar to one you already added in the "User says" section and watch the response.
As you can see, our agent responds back with one of your trained responses. You can see the power of machine learning in action since even if you enter a question that you didn’t define, the agent knows how to interpret it and return a response.
Remember, the more statements you add, the better your agent will be able to respond.
Finally, let's try it on Facebook Messenger.
And it's working! You can now create a simple but very effective chatbot, and you can add as much interactivity as you want: entities, API calls, voice recognition, etc.
Thanks for reading! As you can see, creating a chatbot doesn’t have to be as daunting as it seems.
Have you built a beautiful website, and are you looking to host it for free with a custom domain?
Then look no further.
This article will explain how to get your website up using two great free tools: Github Pages and Cloudflare.
Before we get started, let's go over some of the basics:
Cloudflare is a CDN – a content delivery network. It mirrors your website on its servers all over the world. That means it's faster for anyone who wants to access it, no matter where they are. As a bonus, it also protects you against people who might want to overload your site with automated bots trying to visit it and drain your bandwidth (DDoS attacks).
You can read more about how they describe themselves here.
There are several reasons to use Cloudflare. It's free. It has a simple DNS manager that lets you set up mail and subdomains. It's got built-in HTTPS domain management. It automatically minifies your website's static assets, speeds up access for visitors all over the world, and protects against downtime. You can see their panel of options right here:
About GitHub Pages:
GitHub is best known for being a code repository, and GitHub Pages was originally designed as a way for open-source projects to host pages about themselves. Since its release, it’s grown into a highly versatile platform for hosting content in a production setting. It’s reliable, robust, fast, and great for serving most kinds of corporate and personal static sites. Their own description puts it best: “GitHub Pages is a static site hosting service. It is designed to host your personal, organization, or project pages directly from a GitHub repository.”
Custom domain you’ve purchased from a registry like NameCheap
Step 1: Deploy your static website using Github Pages.
At this point, we should have a GitHub repository and a deployment environment using GitHub Pages; we deploy with GitHub Pages by pushing to the "gh-pages" branch.
Step 2: Insert your custom domain in Github repository settings
Select the "settings" option from the repository's navbar; it's the last option. Once you're in "settings", scroll to the GitHub Pages area, insert your custom domain, and click the "save" button.
Step 3: Setup your custom domain on Cloudflare
Log in to your Cloudflare account and insert your custom domain to scan DNS Records.
After you click the "Scan DNS Records" button, a progress bar will appear. Click the "continue" button when it finishes, then insert the necessary A and CNAME records. We will end up with this structure:
The A and CNAME records are the two common ways to map a hostname to one or more IP addresses.
The A record points a name to a specific IP, used when the IP is known and stable. In our case, yourdomain.com points to the GitHub Pages server's IP address.
The CNAME record points a name to another name instead of an IP. The CNAME source is an alias for the target name and inherits its entire resolution chain. In our case, we use GitHub Pages, so we set www as a CNAME for astephannie.github.io.
In this step we set two A records; this is necessary to route traffic between Cloudflare and GitHub Pages. From now on, all requests to yourdomain.com will be routed to the static website on GitHub. Click the "continue" button to go to the next step.
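The resulting records would look something like this (zone-file style; the A-record IPs are whatever GitHub Pages documents for apex domains at the time you set this up, and astephannie.github.io is this example's Pages hostname):

```
yourdomain.com.    A      <GitHub Pages IP #1>
yourdomain.com.    A      <GitHub Pages IP #2>
www                CNAME  astephannie.github.io.
```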
Step 4: Select the Cloudflare plan
Select the Cloudflare plan you want. In our case, we select “Free Website” and click on the “continue” button.
Step 5: Update the Nameservers on your domain dashboard
Copy the Nameservers from Cloudflare and paste them on your Domain dashboard.
For this example, our domain is registered at godaddy.com. We need to access the domain settings and change the nameservers.
You will have this result in pending status at the beginning:
At the end, the status will change:
Step 6: Setup Minification of the website assets
There are some other Cloudflare options worth configuring. We've already done the steps necessary to get our website live on the domain we set up; check out caching and page rules to continue exploring Cloudflare's options and see how powerful it can be.
Step 7: You’re done!
Your website is live. You can make changes directly in Github on the gh-pages branch, and they’ll appear directly on your website.
While debugging, I found that Edge's renderer process (MicrosoftEdgeCP.exe) failed to load a manually mapped/injected DLL. Tracing back to kernel32!LoadLibraryA, I found it returned ERROR_INVALID_IMAGE_HASH directly. Tracking LdrLoadDll/NtCreateSection further down into the kernel, I noticed three new DLL-load security features in Windows 10 TH2:
1. For a specific process, prohibit loading unsigned DLLs (SignatureMitigationOptIn)
2. For a specific process, prohibit loading DLLs from remote locations (ProhibitRemoteImageMap)
3. For a specific process, prohibit loading image files with a low integrity level (ProhibitLowILImageMap)
All three of these features are part of the Mitigation Policy, which is how Microsoft has stored per-process and global mitigations since Windows 8. You can query and set a process's policy through the public APIs SetProcessMitigationPolicy / GetProcessMitigationPolicy (actually NtQueryInformationProcess / NtSetInformationProcess with ProcessMitigationPolicy); Google Chrome also uses these two APIs to harden itself. The Mitigation Policy can also be set through IFEO, inherited from the parent process, or applied by setting the attribute list in STARTUPINFOEX when creating the process.
According to Microsoft's official documentation, GetProcessMitigationPolicy and SetProcessMitigationPolicy are only available on Windows 8 and Windows 10: https://msdn.microsoft.com/en-us/library/windows/desktop/hh769085(v=vs.85).aspx
Windows 10 TH2: the full set of Mitigation Policy data structures.
The three security features mentioned here are MitigationOptIn from ProcessSignaturePolicy, and NoRemoteImages & NoLowMandatoryLabelImages from the new ProcessImageLoadPolicy introduced in TH2.
Once set, whether specified at run time, inherited, or applied at process creation, none of these three options can be turned off again; and because the policy is enforced in the kernel, they cannot be disabled even with code execution inside the process.
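From user mode, opting a process into these mitigations looks roughly like this (a sketch for Windows 8+/TH2; PROCESS_MITIGATION_IMAGE_LOAD_POLICY with the NoRemoteImages/NoLowMandatoryLabelImages fields requires a TH2-era SDK):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* prohibit loading DLLs from remote (UNC/WebDAV) locations and
       image files with a low integrity label -- irreversible once set */
    PROCESS_MITIGATION_IMAGE_LOAD_POLICY imgPolicy = { 0 };
    imgPolicy.NoRemoteImages = 1;
    imgPolicy.NoLowMandatoryLabelImages = 1;
    if (!SetProcessMitigationPolicy(ProcessImageLoadPolicy,
                                    &imgPolicy, sizeof(imgPolicy)))
        printf("ProcessImageLoadPolicy failed: %lu\n", GetLastError());

    /* prohibit loading DLLs that are not signed by Microsoft */
    PROCESS_MITIGATION_BINARY_SIGNATURE_POLICY sigPolicy = { 0 };
    sigPolicy.MicrosoftSignedOnly = 1;
    if (!SetProcessMitigationPolicy(ProcessSignaturePolicy,
                                    &sigPolicy, sizeof(sigPolicy)))
        printf("ProcessSignaturePolicy failed: %lu\n", GetLastError());

    return 0;
}
```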
Ubuntu on Windows, one of the hot topics at Build 2016, can now be tried in the Insider Preview.
I took a light look around after installing it.
First of all, compare the /usr/bin path as seen from bash in Ubuntu on Windows and from cmd in native Windows.
And I ran the vi editor in the background.
Can you see init, bash, and vi in Process Explorer on the left?
You cannot see init, bash, or vi in Task Manager's process list; the only thing visible is the Bash.exe used to launch the initial bash shell.
(Note that Bash.exe and bash run as separate processes.)
In addition, in Process Explorer you cannot see the process image path of any of these processes except Bash.exe!
If you actually check at the kernel level, all of them have process objects, but all of the information, such as process image path and name, is empty.
What this suggests is that when something running in Ubuntu on Windows performs certain actions (process execution, file modification, network communication), it is hard to pin down exactly what did what.
The processes in Ubuntu on Windows, except for the initial Bash.exe, do not even leave prefetch files.
The following screenshot shows Process Monitor while creating a file under the C:\Windows\System32\ path from the bash shell.
If Windows 10 Redstone is officially released this summer, this will be very annoying for forensics and security programs!
The next day, I looked around Ubuntu on Windows a little more.
Today I'm going to talk about Windows accounts and a little bit about the file system in Ubuntu on Windows.
The reason is that Ubuntu on Windows is not a typical virtualization setup, so some surprising things can happen.
First, where in the actual Windows host file system are files created in the Ubuntu on Windows environment stored?
(Actually, I should have told you this yesterday, but I forgot to mention it.)
First of all, in Ubuntu on Windows, the Linux subsystem environment is created independently for each Windows account.
In other words, a Windows user named A has a root account in their own Ubuntu on Windows environment, and another Windows user on the same machine has a separate root account in a separate environment.
In this situation, the space used by each user's root account is located at a different path on the actual host.
Even if one user has already installed the bash shell, each additional user has to download and install it again before they can use it.
By default, Ubuntu on Windows lives under the %LOCALAPPDATA%\lxss path on the host file system.
The core components of Ubuntu are here; we'll talk about them later.
The next screenshots show the "tester" account using the /mnt directory inside the Linux environment to reach the "hopper" account's files stored on the host file system.
Can you see it?
Look at that: the "tester" user is reaching the "hopper" user's files!!! It can even modify them!!!
Normally, "tester" would need privileges to access "hopper"'s user directory, but by default it can do so simply by running Bash.
How did that feel? Doesn't seem very independent, does it? Haha
Now… there's another part that security solutions need to address.
For example, suppose we have implemented "self-protection" that prevents our files from being tampered with without permission:
we identify the processes that access our files and block the unauthorized ones.
But do you remember from yesterday's post that the processes spawned from bash have process objects, but their process image path and image file name information are empty?
If our self-protection logic decides whether to allow a "write" based on the image path of the accessing process, we won't be able to identify anything coming from Ubuntu on Windows. There's no path information! Hahaha
So what should we do? Of course there’s a way. I’ll talk about it next time.
As mentioned, the process names of Linux subsystem processes cannot be obtained from the "process image path" information on the kernel process object.
This means you cannot see these process names in Task Manager either, since it is driven by the same "process image path" information.
So what should we do?
Fortunately there are many ways.
There are a couple of approaches I already use, and today I'll show one of them, which works the same way in both user mode and kernel mode.
(The rest will come later……)
It's one you all know!
The method is… calling the NtQuerySystemInformation() API.
It’s too easy, right? Hahaha
If you call the NtQuerySystemInformation() API with the SystemProcessInformation class, you get information about every process on the system.
You can read each process name from the ImageName field while traversing the returned list.
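A user-mode sketch of that approach (links against ntdll; SYSTEM_PROCESS_INFORMATION as declared in the public winternl.h):

```c
#include <windows.h>
#include <winternl.h>
#include <stdio.h>
#include <stdlib.h>

#pragma comment(lib, "ntdll.lib")

int main(void)
{
    ULONG len = 0;
    /* first call gets the required buffer size */
    NtQuerySystemInformation(SystemProcessInformation, NULL, 0, &len);
    BYTE *buf = (BYTE *)malloc(len + 4096); /* pad: the list can grow between calls */
    if (!buf) return 1;

    if (NtQuerySystemInformation(SystemProcessInformation, buf,
                                 len + 4096, &len) == 0) {
        SYSTEM_PROCESS_INFORMATION *spi = (SYSTEM_PROCESS_INFORMATION *)buf;
        for (;;) {
            /* ImageName is a UNICODE_STRING; it is populated even for
               Linux-subsystem processes whose image path is empty */
            wprintf(L"PID %5u  %.*s\n",
                    (unsigned)(ULONG_PTR)spi->UniqueProcessId,
                    (int)(spi->ImageName.Length / sizeof(WCHAR)),
                    spi->ImageName.Buffer ? spi->ImageName.Buffer : L"");
            if (spi->NextEntryOffset == 0) break;
            spi = (SYSTEM_PROCESS_INFORMATION *)((BYTE *)spi + spi->NextEntryOffset);
        }
    }
    free(buf);
    return 0;
}
```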
Here is a test run in an environment with programs started through Bash.
The init, bash, and vi processes that appear in Process Explorer on the left are also printed correctly by the program I wrote (upper right).
On the other hand, Task Manager shows nothing where the init process should be. (The other two aren't visible in Task Manager either.)
Now … Well, now you’re wondering.
How does the NtQuerySystemInformation() API get the name of the process?
The answer is in the Linux subsystem kernel implementation.
I think it's time to talk about the kernel implementation of the Linux subsystem behind Ubuntu on Windows.