Implementing a validation library isn’t all that hard. Neither is adding all of those extra features that make your validation library much better than the rest.
This article continues implementing the validation library we started building in the previous part of this series. These are the features that will take us from a simple proof of concept to an actually usable library!
Since we’re validating on all change events, we’re showing the user error messages way too early for a good user experience. There are a few ways we can mitigate this.
The first solution is simply providing the submitted flag as a returned property of the useValidation hook. This way, we can check whether or not the form is submitted before showing an error message. The downside here is that our “show error code” gets a bit longer:
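For illustration, the check might boil down to something like this (a hypothetical helper; `submitted` and `errors` are the values returned from the useValidation hook):

```javascript
// Only surface an error once the form has been submitted.
// `submitted` and `errors` come from useValidation; the helper name is ours.
const getVisibleError = (submitted, errors, fieldName) =>
  submitted && errors[fieldName] ? errors[fieldName] : null;
```

Every consumer that renders an error message now has to thread the `submitted` flag through, which is the extra verbosity mentioned above.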
Another approach is to provide a second set of errors (let’s call them submittedErrors), which is an empty object if submitted is false, and the errors object if it’s true. We can implement it like this:
const useValidation = config => {
  // as before
  return {
    // as before
    submittedErrors: state.submitted ? state.errors : {},
  };
};
This way, we can simply destructure out the type of errors that we want to show. We could, of course, do this at the call site as well — but by providing it here, we’re implementing it once instead of inside all consumers.
A lot of people want to be shown an error once they leave a certain field. We can add support for this, by tracking which fields have been “blurred” (navigated away from), and returning an object blurredErrors, similar to the submittedErrors above.
The implementation requires us to handle a new action type, blur, which updates a new state object called blurred:
const initialState = {
  values: {},
  errors: {},
  blurred: {},
  submitted: false,
};

function validationReducer(state, action) {
  switch (action.type) {
    // as before
    case 'blur':
      const blurred = {
        ...state.blurred,
        [action.payload]: true,
      };
      return { ...state, blurred };
    default:
      throw new Error('Unknown action type');
  }
}
When we dispatch the blur action, we create a new property in the blurred state object with the field name as a key, indicating that that field has been blurred.
The next step is adding an onBlur prop to our getFieldProps function, that dispatches this action when applicable:
getFieldProps: fieldName => ({
  // as before
  onBlur: () => {
    dispatch({ type: 'blur', payload: fieldName });
  },
}),
Finally, we need to provide the blurredErrors from our useValidation hook so that we can show the errors only when needed.
const blurredErrors = useMemo(() => {
  const returnValue = {};
  for (let fieldName in state.errors) {
    returnValue[fieldName] = state.blurred[fieldName]
      ? state.errors[fieldName]
      : null;
  }
  return returnValue;
}, [state.errors, state.blurred]);
// as before
Here, we create a memoized function that figures out which errors to show based on whether or not the field has been blurred. We recalculate this set of errors whenever the errors or blurred objects change. You can read more about the useMemo hook in the documentation.
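To make the behavior easy to verify outside of React, the same calculation can be written as a pure function:

```javascript
// Pure version of the memoized calculation: an error is only kept for
// fields the user has already blurred; other fields map to null.
function getBlurredErrors(errors, blurred) {
  const returnValue = {};
  for (let fieldName in errors) {
    returnValue[fieldName] = blurred[fieldName] ? errors[fieldName] : null;
  }
  return returnValue;
}
```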
Our useValidation component is now returning three sets of errors — most of which will look the same at some point in time. Instead of going down this route, we’re going to let the user specify in the config when they want the errors in their form to show up.
Our new option — showErrors — will accept either “submit” (the default), “always” or “blur”. We can add more options later, if we need to.
function getErrors(state, config) {
  if (config.showErrors === 'always') {
    return state.errors;
  }
  if (config.showErrors === 'blur') {
    return Object.entries(state.blurred)
      .filter(([, blurred]) => blurred)
      .reduce((acc, [name]) => ({ ...acc, [name]: state.errors[name] }), {});
  }
  return state.submitted ? state.errors : {};
}
const useValidation = config => {
  // as before
  const errors = useMemo(
    () => getErrors(state, config),
    // as before
  );
  // as before
};
Since the error handling code started to take most of our space, we’re refactoring it out into its own function. If you don’t follow the Object.entries and .reduce stuff — that’s fine — it’s a rewrite of the for...in code in the last section.
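Here is getErrors once more as a standalone function, exercised against a hand-rolled state object so you can run it in plain Node:

```javascript
function getErrors(state, config) {
  if (config.showErrors === 'always') {
    return state.errors;
  }
  if (config.showErrors === 'blur') {
    return Object.entries(state.blurred)
      .filter(([, blurred]) => blurred)
      .reduce((acc, [name]) => ({ ...acc, [name]: state.errors[name] }), {});
  }
  return state.submitted ? state.errors : {};
}

// A made-up state snapshot: both fields have errors, only username is blurred,
// and the form hasn't been submitted yet.
const state = {
  errors: { username: 'Required', email: 'Invalid' },
  blurred: { username: true },
  submitted: false,
};

// 'always' returns every error, 'blur' returns only blurred fields, and the
// default ('submit') returns nothing until the form has been submitted.
```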
If we want onBlur or instant validation, we can specify the showErrors property in our useValidation configuration object.
const config = {
  // as before
  showErrors: 'blur',
};
const { getFormProps, getFieldProps, errors } = useValidation(config);
// errors would now only include the ones that have been blurred
Note that I’m now assuming that each form will want to show errors the same way (always on submit, always on blur, and so on). That might be true for most applications, but probably not for all. Being aware of your assumptions is a huge part of creating your API.
Allow For Cross-Validation
A really powerful feature of a validation library is to allow for cross-validation — that is, to base one field’s validation on another field’s value.
To allow this, we need to make our custom hook accept a function instead of an object. This function will be called with the current field values. Implementing it is actually only three lines of code!
const useValidation = config => {
  const [state, dispatch] = useReducer(...);
  if (typeof config === 'function') {
    config = config(state.values);
  }
  // as before
};
To use this feature, we can simply pass a function that returns the configuration object to useValidation:
const { getFieldProps } = useValidation(fields => ({
  password: {
    isRequired: { message: 'Please fill out the password' },
  },
  repeatPassword: {
    isRequired: { message: 'Please fill out the password one more time' },
    isEqual: { value: fields.password, message: 'Your passwords don’t match' },
  },
}));
Here, we use the value of fields.password to make sure two password fields contain the same input (which is terrible user experience, but that’s for another blog post).
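For illustration, here is a runnable sketch of how such an isEqual validator can work. The real calidators API may differ in its details, but the shape matches this article: a validator takes its config and returns a function that maps a value to an error message (or null when valid).

```javascript
// Sketch of a cross-validating isEqual validator: valid when the field's
// value matches config.value, otherwise it returns the configured message.
const isEqual = config => value =>
  value === config.value ? null : config.message;

// Simulating the repeated-password check from the config above:
const checkRepeat = (passwordValue, repeatValue) =>
  isEqual({ value: passwordValue, message: 'Your passwords don’t match' })(
    repeatValue
  );
```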
A neat thing to do when you’re in charge of the props of a field is to add the correct aria-tags by default. This will help screen readers with explaining your form.
A very simple improvement is to add aria-invalid="true" if the field has an error. Let’s implement that:
const useValidation = config => {
  // as before
  return {
    // as before
    getFieldProps: fieldName => ({
      // as before
      'aria-invalid': String(!!state.errors[fieldName]),
    }),
  };
};
That’s one added line of code, and a much better user experience for screen reader users.
You might wonder why we write String(!!state.errors[fieldName]). state.errors[fieldName] is a string, and the double negation gives us a boolean (and not just a truthy or falsy value). However, the aria-invalid property should be a string (it can also read “grammar” or “spelling”, in addition to “true” or “false”), so we need to coerce that boolean into its string equivalent.
There are still a few more tweaks we could do to improve accessibility, but this seems like a fair start.
Shorthand Validation Message Syntax
Most of the validators in the calidators package (and most other validators, I assume) only require an error message. Wouldn’t it be nice if we could just pass that string instead of an object with a message property containing that string?
Let’s implement that in our validateField function:
function validateField(fieldValue = '', fieldConfig, allFieldValues) {
  for (let validatorName in fieldConfig) {
    let validatorConfig = fieldConfig[validatorName];
    if (typeof validatorConfig === 'string') {
      validatorConfig = { message: validatorConfig };
    }
    const configuredValidator = validators[validatorName](validatorConfig);
    const errorMessage = configuredValidator(fieldValue);
    // as before
  }
}
This way, we can rewrite our validation config like so:
const config = {
  username: {
    isRequired: 'The username is required',
    isEmail: 'The username should be a valid email address',
  },
};
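The normalization step can also be viewed in isolation. This hypothetical helper shows the transformation validateField now performs internally:

```javascript
// A string shorthand becomes the object form the validators expect;
// object configs pass through untouched.
const normalizeValidatorConfig = validatorConfig =>
  typeof validatorConfig === 'string'
    ? { message: validatorConfig }
    : validatorConfig;
```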
Initial Field Values
Sometimes, we need to validate a form that’s already filled out. Our custom hook doesn’t support that yet — so let’s get to it!
Initial field values will be specified in the config for each field, in the property initialValue. If it’s not specified, it defaults to an empty string.
We’re going to create a function getInitialState, which will create the initial state of our reducer for us.
We go through all fields, check if they have an initialValue property, and set the initial value accordingly. Then we run those initial values through the validators and calculate the initial errors as well. We return the initial state object, which can then be passed to our useReducer hook.
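As a sketch, getInitialState could look something like the following, assuming we hand it the fields portion of the config. The validateFields helper here is a simplified stand-in for the real validation logic from part one; it only understands the isRequired string shorthand.

```javascript
// Simplified stand-in for the real validateFields: it only checks the
// isRequired string shorthand from the previous section.
function validateFields(values, fields) {
  const errors = {};
  for (let name in fields) {
    errors[name] =
      fields[name].isRequired && values[name] === ''
        ? fields[name].isRequired
        : null;
  }
  return errors;
}

// Build the reducer's initial state: pick up each field's initialValue
// (defaulting to an empty string) and validate those values right away.
function getInitialState(fields) {
  const initialValues = {};
  for (let fieldName in fields) {
    initialValues[fieldName] = fields[fieldName].initialValue || '';
  }
  return {
    values: initialValues,
    errors: validateFields(initialValues, fields),
    blurred: {},
    submitted: false,
  };
}
```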
Since we’re introducing a non-validator prop into the fields config, we need to skip it when we validate our fields. To do that, we change our validateField function:
function validateField(fieldValue = '', fieldConfig) {
  const specialProps = ['initialValue'];
  for (let validatorName in fieldConfig) {
    if (specialProps.includes(validatorName)) {
      continue;
    }
    // as before
  }
}
As we keep on adding more features like this, we can add them to our specialProps array.
Outsourcing is a significant breakthrough that took off in the late ’90s in North America and Europe. Businesses outsourced many different aspects and processes, including the handling of documents from public, classified, and restricted archives. Without a doubt, outsourcing is an efficient method that enables businesses to focus on their core processes.
Why It Matters
The rise of document management outsourcing isn’t that surprising, considering that most companies view their document management solution as an inconvenient part of their functional operations. Companies are aware that it causes unnecessary delays from time to time. Aside from delays, in-house document management can also result in an even more disorganized process.
These inconveniences can lead to errors and redundancies that could easily be avoided by consulting experienced document management experts. These people can provide a holistic solution and approach to your enterprise content management, which plays a fundamental role in the control of the information in your company. The effectiveness of your document management system can have a negative or positive impact on your business processes.
Having established the importance of document management, it’s natural to want to know more about what it is and how to make the most of its benefits when the process is done as it should be.
What Exactly Is Document Management?
Document management is often referred to in terms of Document Management Systems (DMS). The process uses document management software to store, manage, and track electronic documents. In other cases, it is used on images of paper-based information captured with a document scanner, as well as other file types. The system comprehensively shows how your organization stores, manages, and tracks its electronic copies of documents, which is applicable in modern or paperless settings.
Document management is one of the precursor technologies to content management. Before it became mainstream, document management was available only on a stand-alone basis, alongside its imaging, workflow, and archiving brethren. In its first few years, the document management system wasn’t considered as necessary as other functions. That changed when businesses discovered the benefits a document management system offers, especially when a reputable outsourcing company is hired to do the job.
To date, outsourcing firms provide some of the most basic functionalities of content management, such as imposing controls and management capabilities onto otherwise “stagnant” documents. In addition, they now have the capability to create software or programs that can handle more complicated procedures in the document management process. This is extremely convenient when you have piles of documents and have to find a specific file immediately.
Top Benefits of Outsourcing Document Management
Outsourcing document management is an effective way to help business owners divert their attention and full potential to more essential divisions of the company.
Most, if not all, outsourcing companies can provide an equivalent level of effectiveness in services at a much lower cost. This aspect of outsourcing document management is of vital importance, because business and company owners are looking for ways to save as much as possible. But, of course, companies need not bargain away the quality of results. Outsourcing document management is therefore a common ground that meets both budget and quality requirements. You may find the information John Mancini has provided about outsourcing document management very useful.
It’s a cost-effective strategy that allows small to large businesses to save and make use of their budgets in more critical areas of the company.
In general, outsourcing is a great way to save money, since you can use the budget in other ways you’d find more profitable. For instance, instead of investing in a few sets of computers or devices, you can use the extra money for office improvements. The outsourcing company will then be responsible for the machines the outsourced employees use. Simple changes like this yield considerable improvements in budget allocation.
It eliminates the risk and burden of launching a solution.
When you choose to outsource, you cut down on the time spent brainstorming the best solutions for your company’s requirements. The document management outsourcing firm can evaluate the demands of your business and find suitable strategies tailored to fit your needs.
Outsourcing helps save space or storage.
Having document management done by a dedicated team is an excellent way to save office space. The documents can be collected either in cloud storage (soft files) or in file cabinets if the company prefers printed copies. It’s also a good option if you want to make your office look more ergonomic and organized. CIO offers some valuable information about the benefits of outsourcing.
Outsourcing document management requires little to no employee involvement.
Another great benefit of using outsourcing for document management is that you can cut down on the number of employees who handle the documents. You also don’t need to hire more people to manage the records, as the outsourcing company will do it for you.
Get your document management handled by the experts.
Most outsourcing firms are well trained in providing efficient methods of document management, so there is a higher chance that you’ll get the results your company deserves. Moreover, these outsourcing firms choose document managers who are highly skilled and professional in handling different types of files, using dedicated software suitable for your business functions and the industry you’re in. Amplexor has published a useful article on document management from the experts’ point of view.
Outsourced documents pass industry compliance.
Since experts handle the materials involved, you are assured that each file passes industry compliance. Most outsourcing firms offering the service run quality assurance checks to determine whether any material needs revision according to industry standards.
It’s an efficient way to create reports.
Organizing, validating, and disseminating reports is another time-consuming task that delays the performance of other vital work. This is why it’s highly beneficial to use outsourcing firms that offer efficient daily, weekly, and quarterly reports.
It’s a useful method for maintaining document copies.
It’s true that outsourcing document management can save space and storage in the office. But that doesn’t mean the document managers will get rid of old files to make room for new ones. In fact, the outsourcing firm will find ways to efficiently save and maintain copies of documents, whether they were created recently or years ago.
An efficient strategy to improve the workflow of the business.
Overall, outsourcing business processes is a great way to enhance the workflow. It makes the tasks easier and the results better. It is suitable for almost all types of businesses across different industries, making it a flexible technique for successful management of the company.
Outsourcing document management is a scalable process.
Document management services can be easily scaled up or down based on your company’s demands and specifications. A professional outsourcing bureau offering this service usually provides a complementary product development team to meet all your company’s future needs.
Expert management of documents requires a shorter turnaround.
What makes outsourcing document management even more beneficial is that it offers a quicker turnaround time. Expert document managers can handle more files in a shorter period.
Outsourcing document management provides maximum security and privacy.
Outsourcing document management was not well received in the first few years after its launch, due to the privacy and security concerns many business owners had. As time went by, business owners learned to trust the system because of advancements in features and security settings. Using a document management system has now become more secure, packed with state-of-the-art features that keep documents in a safe place.
Get accuracy in results.
Utilizing document management services yields excellent and accurate results. Companies offering document management services use techniques and tools that generate accurate reports, which are then used to analyze the data and determine which files or documents were processed.
Software automation at its finest
Outsourcing firms make extensive use of software automation in document management; a large portion of the process relies on document management software. Every company offering document management services uses different software and automation processes, depending on the needs of the business or client.
In a modern business setting, there are growing volumes of documents, especially if the company has been around for quite a long time. Thus, it requires commitment and hard work to manage these files accordingly. From reducing the risk of losses to improving employee efficiency in meeting deadlines, there are great benefits that should convince you to outsource document management to a professional record management provider. They can address various company priorities as necessary.
Choosing where to focus your company’s energy and resources is a crucial step toward progress. Thus, it’s vital to select a company you can trust when outsourcing document management for your business. With all of the business transactions happening, it is easy to feel swamped and overwhelmed, especially if you are short on manpower. However, if you pick a trusted firm to handle these tasks, your business processes can go more smoothly and systematically than ever.
Android is celebrating two amazing milestones today: it’s Android’s version 10, and Android is now running on more than 2.5B active devices.
With Android Q, we’ve focused on three themes: innovation, security and privacy, and digital wellbeing. We want to help you take advantage of the latest new technology — 5G, foldables, edge-to-edge screens, on-device AI, and more — while making sure users’ security, privacy, and wellbeing are always a top priority.
Earlier at Google I/O we highlighted what’s new in Android Q and unveiled the latest update, Android Q Beta 3. Your feedback continues to be extremely valuable in shaping today’s update as well as our final release to the ecosystem in the fall.
This year, Android Q Beta 3 is available on 15 partner devices from 12 OEMs — that’s twice as many devices as last year! It’s all thanks to Project Treble and especially to our partners who are committed to accelerating updates to Android users globally — Huawei, Xiaomi, Nokia, Sony, Vivo, OPPO, OnePlus, ASUS, LGE, TECNO, Essential, and realme.
Visit android.com/beta to see the full list of Beta devices and learn how to get today’s update on your device. If you have a Pixel device, you can enroll here to get Beta 3 — if you’re already enrolled, watch for the update coming soon. To get started developing with Android Q Beta, visit developer.android.com/preview.
Privacy and security
As we talked about at Google I/O, privacy and security are important to our whole company and in Android Q we’ve added many more protections for users.
In Android Q, privacy has been a central focus, from strengthening protections in the platform to designing new features with privacy in mind. It’s more important than ever to give users control — and transparency — over how information is collected and used by apps, and by our phones.
Building on our work in previous releases, Android Q includes extensive changes across the platform to improve privacy and give users control — from improved system UI to stricter permissions to restrictions on what data apps can use.
For example, Android Q gives users more control over when apps can get location. Apps still ask the user for permission, but now in Android Q the user has greater choice over when to allow access to location — such as only while the app is in use, all the time, or never. Read the developer guide for details on how to adapt your app for the new location controls.
Outside of location, we also introduced the Scoped Storage feature to give users control over files and prevent apps from accessing sensitive user or app data. Your feedback has helped us refine this feature, and we recently announced several changes to make it easier to support. These are now available in Beta 3.
Another important change is restricting app launches from the background, which prevents apps from unexpectedly jumping into the foreground and taking over focus. In Beta 3 we’re transitioning from toast warnings to actually blocking these launches.
To keep users secure, we’ve extended our BiometricPrompt authentication framework to support biometrics at a system level. We’re extending support for passive authentication methods such as face, and we’ve added implicit and explicit authentication flows. In the explicit flow, the user must explicitly confirm the transaction. The new implicit flow is designed for a lighter-weight alternative for transactions with passive authentication, and there’s no need for users to explicitly confirm.
Android Q also adds support for TLS 1.3, a major revision to the TLS standard that includes performance benefits and enhanced security. Our benchmarks indicate that secure connections can be established as much as 40% faster with TLS 1.3 compared to TLS 1.2. TLS 1.3 is enabled by default for all TLS connections made through Android’s TLS stack, called Conscrypt, regardless of target API level. See the docs for details.
Today we also announced Project Mainline, a new approach to keeping Android users secure and their devices up-to-date with important code changes, direct from Google Play. With Project Mainline, we’re now able to update specific internal components within the OS itself, without requiring a full system update from your device manufacturer. This means we can help keep the OS code on devices fresher, drive a new level of consistency, and bring the latest AOSP code to users faster — and for a longer period of time.
We plan to update Project Mainline modules in much the same way as app updates are delivered today — downloading the latest versions from Google Play in the background and loading them the next time the phone starts up. The source code for the modules will continue to live in the Android Open Source Project, and updates will be fully open-sourced as they are released. Also, because they’re open source, they’ll include improvements and bug fixes contributed by our many partners and developer community worldwide.
For users, the benefits are huge, since their devices will always be running the latest versions of the modules, including the latest updates for security, privacy, and consistency. For device makers, carriers, and enterprises, the benefits are also huge, since they can optimize and secure key parts of the OS without the cost of a full system update.
For app and game developers, we expect Project Mainline to help drive consistency of platform implementation in key areas across devices, over time bringing greater uniformity that will reduce development and testing costs and help to make sure your apps work as expected. All devices running Android Q or later will be able to get Project Mainline, and we’re working closely with our partners to make sure their devices are ready.
Innovation and new experiences
Android is shaping the leading edge of innovation. With our ecosystem partners, we’re enabling new experiences through a combination of hardware and software advances.
This year, display technology will take a big leap with foldable devices coming to the Android ecosystem from several top device makers. When folded, these devices work like a phone; unfolded, they give you a beautiful tablet-sized screen.
We’ve optimized Android Q to ensure that screen continuity is seamless in these transitions, and apps and games can pick up right where they left off. For multitasking, we’ve made some changes to onResume and onPause to support multi-resume and notify your app when it has focus. We’ve also changed how the resizeableActivity manifest attribute works, to help you manage how your app is displayed on large screens.
Our partners have already started showing their innovative foldable devices, with more to come. You can get started building and testing today with our foldables emulator in the canary release of Android Studio 3.5.
5G networks are the next evolution of wireless technology — providing consistently faster speeds and lower latency. For developers, 5G can unlock new kinds of experiences in your apps and supercharge existing ones.
Android Q adds platform support for 5G and extends existing APIs to help you transform your apps for 5G. You can use connectivity APIs to detect if the device has a high bandwidth connection and check whether the connection is metered. With these your apps and games can tailor rich, immersive experiences to users over 5G.
With Android’s open ecosystem and range of partners, we expect the Android ecosystem to scale to support 5G quickly. This year, over a dozen Android device makers are launching 5G-ready devices, and more than 20 carriers will launch 5G networks around the world, with some already broad-scale.
On top of hardware innovation, we’re continuing to see Android’s AI transforming the OS itself to make it smarter and easier to use, for a wider range of people. A great example is Live Caption, a new feature in Android Q that automatically captions media playing on your phone.
Many people watch videos with captions on — the captions help them keep up, even when on the go or in a crowded place. But for 466 million Deaf and Hard of Hearing people around the world, captions are more than a convenience — they make content accessible. We worked with the Deaf community to develop Live Caption.
Live Caption brings real-time captions to media on your phone – videos, podcasts, and audio messages, across any app—even stuff you record yourself. Best of all, it doesn’t even require a network connection — everything happens on the device, thanks to a breakthrough in speech recognition that we made earlier this year. The live speech models run right on the phone, and no audio stream ever leaves your device.
For developers, Live Caption expands the audience for your apps and games by making digital media more accessible with a single tap. Live Caption will be available later this year.
Suggested actions in notifications
In Android Pie we introduced smart replies for notifications, letting users engage with your apps directly from notifications. We provided the APIs to attach replies and actions, but you needed to build those on your own.
Now in Android Q we want to make smart replies available to all apps right now, without you needing to do anything. Starting in Beta 3, we’re enabling system-provided smart replies and actions that are inserted directly into notifications by default.
Android Q suggestions are powered by an on-device ML service built into the platform — the same service that backs our text classifier entity recognition service. We’ve built it with user privacy in mind, and the ML processing happens completely on the device, not on a backend server.
Because suggested actions are based on the TextClassifier service, they can take advantage of new capabilities we’ve added in Android Q, such as language detection. You can also use TextClassifier APIs directly to generate system-provided notifications and actions, and you can mix those with your own replies and actions as needed.
Many users prefer apps that offer a UI with a dark theme they can switch to when light is low, to reduce eye strain and save battery. Users have also asked for a simple way to enable dark theme everywhere across their devices. Dark theme has been a popular request for a while, and in Android Q, it’s finally here.
Starting in Android Q Beta 3, users can activate a new system-wide dark theme by going to Settings > Display, using the new Quick Settings tile, or turning on Battery Saver. This changes the system UI to dark, and enables the dark theme of apps that support it. Apps can build their own dark themes, or they can opt in to a new Force Dark feature that lets the OS create a dark version of their existing theme. All you have to do is opt in by setting android:forceDarkAllowed="true" in your app’s current theme.
You may also want to take complete control over your app’s dark styling, which is why we’ve also been hard at work improving AppCompat’s DayNight feature. By using DayNight, apps can offer a dark theme to all of their users, regardless of what version of Android they’re using on their devices. For more information, see here.
Many of the latest Android devices feature beautiful edge-to-edge screens, and users want to take advantage of every bit of them. In Android Q we’re introducing a new fully gestural navigation mode that eliminates the navigation bar area and allows apps and games to use the full screen to deliver their content. It retains the familiar Back, Home, and recents navigation through edge swipes rather than visible buttons.
Users can switch to gestures in Settings > System > Gestures. There are currently two gestures: Swiping up from the bottom of the screen takes the user to the Home screen, holding brings up Recents. Swiping from the screen’s left or right edge triggers the Back action.
To blend seamlessly with gestural navigation, apps should go edge-to-edge, drawing behind the navigation bar to create an immersive experience. To implement this, apps should use the setSystemUiVisibility() API to be laid out fullscreen, and then handle WindowInsets as appropriate to ensure that important pieces of UI are not obscured. More information is here.
Digital wellbeing is another theme of our work on Android — we want to give users the visibility and tools to find balance with the way they use their phones. Last year we launched Digital Wellbeing with Dashboards, App Timers, Flip to Shush, and Wind Down mode. These tools are really helping. App timers helped users stick to their goals over 90% of the time, and users of Wind Down had a 27% drop in nightly usage.
This year we’re continuing to expand our features to help people find balance with digital devices, adding Focus Mode and Family Link.
Focus Mode is designed for all those times you’re working or studying and want to focus to get something done. With Focus Mode, you can pick the apps you think might distract you and silence them – for example, pausing email and news apps while leaving maps and text messaging apps active. You can then use Quick Tiles to turn on Focus Mode any time you want to focus. Under the covers, these apps will be paused until you come out of Focus Mode. Focus Mode is coming to Android 9 Pie and Android Q devices this fall.
Family Link is a new set of controls to help parents. Starting in Android Q, Family Link will be built right into the Settings on the device. When you set up a new device for your child, Family Link will help you connect it to you. You’ll be able to set daily screen time limits, see the apps where your child is spending time, review any new apps your child wants to install, and even set a device bedtime so your child can disconnect and get to sleep. And now in Android Q you can also set time limits on specific apps… as well as give your kids Bonus Time if you want them to have just 5 more minutes at bedtime. Family Link is coming to Android P and Q devices this Fall. Make sure to check out the other great wellbeing apps in the recent Google Play awards.
Family link lets parents set device bedtime and even give bonus minutes.
We’re continuing to extend the foundations of Android with more capabilities to help you build new experiences for your users — here are just a few.
Improved peer-to-peer and internet connectivity
In Android Q we’ve refactored the Wi-Fi stack to improve privacy and performance, and also to improve common use-cases like managing IoT devices and suggesting internet connections — without requiring the location permission. The network connection APIs make it easier to manage IoT devices over local Wi-Fi, for peer-to-peer functions like configuring, downloading, or printing. The network suggestion APIs let apps surface preferred Wi-Fi networks to the user for internet connectivity.
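A minimal sketch of the suggestion flow might look like this. The SSID and passphrase are placeholders, not real credentials:

```kotlin
import android.content.Context
import android.net.wifi.WifiManager
import android.net.wifi.WifiNetworkSuggestion

// Sketch: suggest a preferred Wi-Fi network to the platform. The user stays
// in control; the platform decides if and when to connect.
fun suggestNetwork(context: Context) {
    val suggestion = WifiNetworkSuggestion.Builder()
        .setSsid("example-iot-hub")          // hypothetical network name
        .setWpa2Passphrase("example-pass")   // hypothetical passphrase
        .build()

    val wifiManager = context.getSystemService(Context.WIFI_SERVICE) as WifiManager
    val status = wifiManager.addNetworkSuggestions(listOf(suggestion))
    if (status != WifiManager.STATUS_NETWORK_SUGGESTIONS_SUCCESS) {
        // Handle the error (e.g. duplicate suggestion or app not permitted).
    }
}
```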
Wi-Fi performance modes
In Android Q apps can now request adaptive Wi-Fi by enabling high performance and low latency modes. These will be of great benefit where low latency is important to the user experience, such as real-time gaming, active voice calls, and similar use-cases. The platform works with the device firmware to meet the requirement with the lowest power consumption. To use the new performance modes, call WifiManager.createWifiLock() with the appropriate lock type.
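As a sketch, a low-latency session might bracket its work with a Wi-Fi lock like this (the lock tag and the `session` callback are illustrative assumptions):

```kotlin
import android.content.Context
import android.net.wifi.WifiManager

// Sketch: hold a low-latency Wi-Fi lock only for the duration of a
// latency-sensitive session, e.g. a real-time game or an active voice call.
fun runLowLatencySession(context: Context, session: () -> Unit) {
    val wifiManager = context.getSystemService(Context.WIFI_SERVICE) as WifiManager
    val lock = wifiManager.createWifiLock(
        WifiManager.WIFI_MODE_FULL_LOW_LATENCY, "myapp:lowLatency")
    lock.acquire()
    try {
        session()       // the latency-sensitive work
    } finally {
        lock.release()  // always release so the platform can save power
    }
}
```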
Full support for Wi-Fi RTT accurate indoor positioning
In Android 9 Pie we introduced RTT APIs for indoor positioning to accurately measure distance to nearby Wi-Fi Access Points (APs) that support the IEEE 802.11mc protocol, based on measuring the round-trip time of Wi-Fi packets. Now in Android Q, we’ve completed our implementation of the 802.11mc standard, adding an API to obtain location information of each AP being ranged, configured by their owner during installation.
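A hedged sketch of a ranging request against 802.11mc-capable access points, reading both the measured distance and (new in Q) the AP’s owner-configured location; the scan results passed in are assumed to come from an earlier Wi-Fi scan:

```kotlin
import android.content.Context
import android.net.wifi.ScanResult
import android.net.wifi.rtt.RangingRequest
import android.net.wifi.rtt.RangingResult
import android.net.wifi.rtt.RangingResultCallback
import android.net.wifi.rtt.WifiRttManager

// Sketch: measure round-trip-time distance to nearby 802.11mc APs.
fun rangeAccessPoints(context: Context, aps: List<ScanResult>) {
    val rttManager =
        context.getSystemService(Context.WIFI_RTT_RANGING_SERVICE) as WifiRttManager
    val request = RangingRequest.Builder().addAccessPoints(aps).build()
    rttManager.startRanging(request, context.mainExecutor,
        object : RangingResultCallback() {
            override fun onRangingResults(results: List<RangingResult>) {
                for (r in results) {
                    if (r.status == RangingResult.STATUS_SUCCESS) {
                        val distanceMm = r.distanceMm
                        // New in Q: AP location as configured by its owner
                        // (may be null if the AP doesn't report one).
                        val location = r.unverifiedResponderLocation
                    }
                }
            }
            override fun onRangingFailure(code: Int) { /* handle failure */ }
        })
}
```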
Audio playback capture
You saw how Live Caption can take audio from any app and instantly turn it into on-screen captions. It’s a seamless experience that shows how powerful it can be for one app to share its audio stream with another. In Android Q, any app that plays audio can let other apps capture its audio stream using a new API. In addition to enabling captioning and subtitles, the API lets you support popular use-cases like live-streaming games, all without latency impact on the source app or game.
We’ve designed this new capability with privacy and copyright protection in mind, so the ability for an app to capture another app’s audio is constrained, giving apps full control over whether their audio streams can be captured. Read more here.
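Under some assumptions, the capture side might be set up like this: a `MediaProjection` obtained through the usual user-consent flow is assumed, and the audio format values are illustrative. Apps that want to opt out of being captured can do so declaratively (via the `allowAudioPlaybackCapture` manifest attribute):

```kotlin
import android.media.AudioAttributes
import android.media.AudioFormat
import android.media.AudioPlaybackCaptureConfiguration
import android.media.AudioRecord
import android.media.projection.MediaProjection

// Sketch: build an AudioRecord that captures other apps' media playback.
fun buildCaptureRecord(projection: MediaProjection): AudioRecord {
    val config = AudioPlaybackCaptureConfiguration.Builder(projection)
        .addMatchingUsage(AudioAttributes.USAGE_MEDIA) // only media streams
        .build()
    val format = AudioFormat.Builder()
        .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
        .setSampleRate(44100)
        .setChannelMask(AudioFormat.CHANNEL_IN_STEREO)
        .build()
    return AudioRecord.Builder()
        .setAudioFormat(format)
        .setAudioPlaybackCaptureConfig(config)
        .build()
}
```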
Dynamic depth for photos
Apps can now request a Dynamic Depth image, which consists of a JPEG, XMP metadata describing the depth elements, and a depth and confidence map, all embedded in the same file on devices that advertise support. Requesting a JPEG + Dynamic Depth image makes it possible for you to offer specialized blurs and bokeh options in your app. You can even use the data to create 3D images or support AR photography use-cases. Dynamic Depth is an open format for the ecosystem — the latest version of the spec is here. We’re working with our device-maker partners to make it available across devices running Android Q and later.
With Dynamic Depth image you can offer specialized blurs and bokeh options in your app
New audio and video codecs
Android Q adds support for the open source video codec AV1, which allows media providers to stream high quality video content to Android devices using less bandwidth. In addition, Android Q supports audio encoding using Opus – a codec optimized for speech and music streaming, and HDR10+ for high dynamic range video on devices that support it. The MediaCodecInfo API introduces an easier way to determine the video rendering capabilities of an Android device. For any given codec, you can obtain a list of supported sizes and frame rates.
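A sketch of that capability query might look like the following; the MIME type shown (HEVC) is just an example, and the printed format is my own:

```kotlin
import android.media.MediaCodecList
import android.media.MediaFormat

// Sketch: enumerate decoders for a given MIME type and print the sizes and
// frame rates each one supports.
fun printDecoderCaps(mime: String = MediaFormat.MIMETYPE_VIDEO_HEVC) {
    val list = MediaCodecList(MediaCodecList.REGULAR_CODECS)
    for (info in list.codecInfos) {
        if (info.isEncoder || !info.supportedTypes.contains(mime)) continue
        val video = info.getCapabilitiesForType(mime).videoCapabilities ?: continue
        println("${info.name}: widths=${video.supportedWidths}, " +
                "heights=${video.supportedHeights}, fps=${video.supportedFrameRates}")
    }
}
```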
Vulkan 1.1 and ANGLE
We’re continuing to expand the impact of Vulkan on Android, our implementation of the low-overhead, cross-platform API for high-performance 3D graphics. We’re working together with our device manufacturer partners to make Vulkan 1.1 a requirement on all 64-bit devices running Android Q and higher, and a recommendation for all 32-bit devices. For game and graphics developers using OpenGL, we’re also working towards a standard, updateable OpenGL driver for all devices built on Vulkan. In Android Q we’re adding experimental support for ANGLE on top of Vulkan on Android devices. See the docs for details.
Neural Networks API 1.2
In NNAPI 1.2 we’ve added 60 new ops, including ARGMAX, ARGMIN, and quantized LSTM, alongside a range of performance optimizations. This lays the foundation for accelerating a much greater range of models — such as those for object detection and image segmentation. We are working with hardware vendors and popular machine learning frameworks such as TensorFlow to optimize and roll out support for NNAPI 1.2.
When devices get too warm, they may throttle the CPU and/or GPU, and this can affect apps and games in unexpected ways. Now in Android Q, apps and games can use a thermal API to monitor changes on the device and take action to help restore normal temperature. For example, streaming apps can reduce resolution/bit rate or network traffic, a camera app could disable flash or intensive image enhancement, or a game could reduce frame rate or polygon tesselation. Read more here.
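The pattern described above could be sketched like this; `reduceQuality()` and `restoreQuality()` are hypothetical app callbacks standing in for whatever mitigation your app performs:

```kotlin
import android.content.Context
import android.os.PowerManager

// Hypothetical mitigation hooks: e.g. lower resolution/bit rate, disable
// flash, or reduce frame rate, then restore when the device cools down.
fun reduceQuality() { /* degrade gracefully */ }
fun restoreQuality() { /* back to normal quality */ }

// Sketch: monitor thermal status changes and react to them.
fun watchThermalStatus(context: Context) {
    val pm = context.getSystemService(Context.POWER_SERVICE) as PowerManager
    pm.addThermalStatusListener { status ->
        when {
            status >= PowerManager.THERMAL_STATUS_SEVERE -> reduceQuality()
            status == PowerManager.THERMAL_STATUS_NONE -> restoreQuality()
        }
    }
}
```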
Android Q introduces several improvements to the ART runtime to help your apps start faster, consume less memory, and run smoother — without requiring any work from you. To help with initial app startup, Google Play is now delivering cloud-based profiles along with APKs. These are anonymized, aggregate ART profiles that let ART pre-compile parts of your app even before it’s run. Cloud-based profiles benefit all apps and they’re already available to devices running Android P and higher.
We’re also adding Generational Garbage Collection to ART’s Concurrent Copying (CC) Garbage Collector. Generational CC collects young-generation objects separately, incurring much lower cost as compared to full-heap GC. It makes garbage collection more efficient in terms of time and CPU, reduces jank, and helps apps run better on lower-end devices.
More Android Q Beta devices, more Treble momentum than ever
In 2017 we launched Project Treble as part of Android Oreo, with a goal of accelerating OS updates. Treble provides a consistent, testable interface between Android and the underlying device code from device makers and silicon manufacturers, which makes porting a new OS version much simpler and more modular.
In 2018 we worked closely with our partners to bring the first OS updates to their Treble devices. The result: last year at Google I/O we had 8 devices from 7 partners joining our Android P Beta program, together with our Pixel and Pixel 2 devices. Fast forward to today — we’re seeing updates to Android Pie accelerating strongly, with 2.5 times the footprint compared to Android Oreo’s at the same time last year.
This year with Android Q we’re seeing even more momentum, and we have 23 devices from 13 top global device makers releasing Android Q Beta 3 — including all Pixel devices. We’re also providing Q Beta 3 Generic System Images (GSI), a testing environment for other supported Treble devices. All of these offer the same behaviors, APIs, and features — giving you an incredible variety of devices for testing your apps, and more ways for you to get an early look at Android Q.
To build with Android Q, download the Android Q Beta SDK and tools into Android Studio 3.3 or higher, and follow these instructions to configure your environment. If you want the latest fixes for Android Q related changes, we recommend you use Android Studio 3.5 or higher.
How do I get Beta 3?
It’s easy! Just enroll any Pixel device here to get the update over-the-air. If you’re already enrolled, you’ll receive the update soon, and no action is needed on your part. Downloadable system images are also available.
You can also get Beta 3 on any of the other devices participating in the Android Q Beta program, from some of our top device maker partners. You can see the full list of supported partner and Pixel devices at android.com/beta. For each device you’ll find specs and links to the manufacturer’s dedicated site for downloads, support, and to report issues.
For even broader testing on supported devices, you can also get Android GSI images, and if you don’t have a device you can test on the Android Emulator — just download the latest emulator system images via the SDK Manager in Android Studio.
OK, that’s a little clickbaity, but it sure impressed the heck out of me. You can read more about VS Code Remote Development (at the time of this writing, available in the VS Code Insiders builds) but here’s a little on my first experience with it.
Visual Studio Code Remote Development allows you to use a container, remote machine, or the Windows Subsystem for Linux (WSL) as a full-featured development environment. It effectively splits VS Code in half and runs the client part on your machine and the “VS Code Server” basically anywhere else. The Remote Development extension pack includes three extensions. See the following articles to get started with each of them:
Remote – SSH – Connect to any location by opening folders on a remote machine/VM using SSH.
Remote – Containers – Work with a sandboxed toolchain or container-based application inside (or mounted into) a container.
Remote – WSL – Get a Linux-powered development experience in the Windows Subsystem for Linux.
Lemme give a concrete example. Let’s say I want to do some work in any of these languages, except I don’t have ANY of these languages/SDKs/tools on my machine.
Aside: You might, at this point, have already decided that I’m overreacting and this post is nonsense. Here’s the thing, though, when it comes to remote development. Hang in there.
On the Windows side, lots of folks create Windows VMs in someone’s cloud and then RDP (Remote Desktop) into that machine and push pixels around, letting the VM do all the work while you remote the screen. On the Linux side, lots of folks create Linux VMs or containers and then SSH into them with their favorite terminal, run vim and tmux or whatever, and push text around, letting the VM do all the work while you remote the text. In both these scenarios you’re not really client/server, you’re terminal/server or thin client/server. VS Code is a thick client with clean, clear interfaces to language services that have location transparency.
I type some code, maybe an object instance, then IntelliSense is invoked with a press of “.” – who does that work? Where does that list come from? If you’re running code locally AND in the container, then you need to make sure both sides are in sync: same SDKs, etc. It’s challenging.
OK, I don’t have the Rust language or toolkit on my machine.
C:\github> git clone https://github.com/Microsoft/vscode-remote-try-rust
Cloning into 'vscode-remote-try-rust'...
Unpacking objects: 100% (38/38), done.
C:\github> cd .\vscode-remote-try-rust
C:\github\vscode-remote-try-rust [main =]> code-insiders .
Then VS Code says, hey, this is a Dev Container, want me to open it?
There’s a devcontainer.json file that has a list of extensions that the project needs. VS Code will install those extensions inside a development Docker container and then access them remotely. This isn’t a list of extensions that your LOCAL system needs – you don’t want to sully your system with 100 extensions. You want to have just those extensions that you need for the project you’re working on. Compartmentalization. You could do development and never install anything on your local machine – you’re finding a sweet spot that doesn’t involve pushing text or pixels around.
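A minimal devcontainer.json might look something like this. Treat the specific values as assumptions for illustration: the Dockerfile name and the Rust extension ID reflect what a Rust sample project of that era would use, not necessarily the exact file in the repo above.

```json
{
    // Name shown when connected to the container.
    "name": "Rust",
    // Dockerfile describing the development container's toolchain.
    "dockerFile": "Dockerfile",
    // Extensions installed inside the container, not on the local machine.
    "extensions": [
        "rust-lang.rust"
    ]
}
```

(VS Code parses this file as JSON with comments, so the annotations above are legal.)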
Now look at this screenshot and absorb. It’s setting up a dockerfile, sure, with the development tools you want to use and then it runs docker exec and brings in the VS Code Server!
Check out the Extensions section of VS Code, and check out the lower left corner. That green status bar shows that we’re in a client/server situation. The extensions specific to Rust are installed in the Dev Container and we are using them from VS Code.
When I’m typing and working on my code in this way (by the way it took just minutes to get started) I’ve got a full experience with Intellisense, Debugging, etc.
Here I am doing a live debug session of a Rust app with zero setup other than VS Code Insiders, the Remote Extensions, and Docker (which I already had).
As I mentioned, you can run within WSL, Containers, or over SSH. It’s early days but it’s extraordinarily clean. I’m really looking forward to seeing how far and effortless this style of development can go. There’s so much less yak shaving! It effectively removes the whole setup part of your coding experience and you get right to it.
Some people have a strong gut feeling for home decor. They exactly know what to buy and where to place it. They can also visualize the colors and they’re never afraid of change.
Then, there are those people who always have their interior designer’s phone number on speed dial. They want to try a lot of things but don’t have an eye for design. If you are one of those people who always prefer to search Pinterest before making any major renovations, I feel you. I also fall into the second category.
However, home decor has never been a headache for me. I love to experiment. I always research thoroughly before trying anything new, and it works well for me. After meeting many interior designers and surfing the internet, I have compiled some secrets and easy decorating tips.
Read on to find out what those tips are.
When we enter our home sweet home, the first thing we notice is the color. The walls cover a significant part of our house. Whether we want to add energy to a dull room or calm a hectic one, we just need to change the color combination.
Use the 50/150 rule. For the perfect color, mix one batch of paint 50% lighter than the base color, and keep another 150% darker than the base color to create a shade.
If the budget does not allow painting the whole house, change the color of just one wall. We can decorate the window wall or the wall behind the bed instead.
No matter how hard our day is, our little nest welcomes us with open arms. As soon as we put our first step on our favorite rug, we suddenly feel comfortable even after a long hectic day.
We must not forget our ceilings and floors while thinking about renovations. Nowadays, painted floors are trending.
Any pastel color, light blue, light green, light pink or yellow may suit the floors. It depends on the color shade we have applied to our walls. We can also buy fancy carpets to make our floors look beautiful.
We often take our ceilings for granted while decorating the house. We forget that even ceilings can change the entire look.
We can renovate our ceilings with P.O.P., designer cuttings, and hanging lamps. We can also decorate them with some carved wood or a beadboard ceiling.
I have seen hanging hammocks in a few houses and I find them super cool. There are many ‘how to’ videos available on YouTube. Apart from that, we can try different colors to make our walls look gorgeous. Bright colors always win the race.
Don’t forget to free a small corner for some ‘me time’ at the time of renovation. An intersection near the window is preferable. Put a cozy chair, stool, lamp, and a lot of books in that corner.
Have some light or transparent curtains on the window and enjoy the sunshine every day. This corner can lighten up the entire day and can fill us with all the positive vibes. We can sit there to write our daily journal or listen to our favorite tracks!
We can also hang a wind-chime or dream catcher to decorate the free space.
The kitchen might be an expensive room to renovate. Sometimes, we may feel that a complete renovation is not necessary or practical.
If so, there are many ideas of kitchen makeovers we can consider. We can deep clean the walls, refresh the paint, add lights, replace cabinets or sink, and add new accessories.
Rearranging the furniture can also make a huge difference. We can also grow some green plants in the kitchen by placing them in ceramic pots or vases.
We spend a third of our lives in the bedroom. Thus, it is imperative to design it carefully to make it a relaxing place.
Comfort should be our priority when it comes to bedrooms. Start decorating the room with the best bed, good-quality mattress, and comfortable pillows.
Our choice of color is also crucial. Colors reflect our personality. Bright colors like yellow or orange can expand our thinking while light colors like blue or white can calm us down.
So, choose accordingly. Apart from the furniture and colors, bedroom lighting also plays a vital role. It helps us set the right mood and encourages us to start our days positively.
Whether it is your living room, dining room or guest room, each has different characteristics. So, each room should be decorated with attractive colors, matching curtains, and creative art pieces.
Refurbish the old furniture to get a new look on the smallest budget. If there is no issue with the closed beds, try having a canopy or bunk bed in the kids’ room. Fairy lights and neon lamps are also popular for instant makeovers. I have placed a bench by the window, which I use to store excess luggage and as a sitting area.
Home is our little world. We can do whatever we want to do according to our convenience.
Last but not the least, let’s discuss bathroom renovations.
It is important to have a spacious and bright bathroom. We generally feel tired of walking into our outdated bathrooms every day. We always dream of a change. So, the first thing we should do is change all the tiles.
Replace the old tiles with new ones and change the lighting. Start using white bulbs and avoid yellow lights if possible. Nowadays, hanging pendants and light bars are stealing the hearts of many.
We can also buy some unique bathroom accessories and place them near the bathtub. We can replace our old cloth hangers with a ladder or creative hooks to make it more stylish.
I believe that our home communicates with us every day. Each wall, floor, door, or window knows us in and out. We feel happy and safe in our home sweet home. So, it is crucial to keep it pretty.
Home decorations are always subjective. The ideas may differ from person to person. These are my set of plans. I hope you can relate to them and use some of these ideas for decent decorations. Have a happy, super positive, and cozy life in your small world.
“I don’t like myself. I’m crazy about myself,” said Mae West.
People tend to be too harsh on themselves, thus subconsciously fostering a negative self-image.
Many of us have that little, nagging voice in our head saying “You’re not good enough!” or “You’re never going to make it.”
These seemingly harmless insecurities slowly erode our self-esteem and confidence which can have a serious impact on the way we perceive things and cope with life’s challenges.
If you want to be happier and live a more satisfied life, cut yourself some slack.
Start being kinder to yourself.
And stop telling yourself these 7 things.
1. I’m So Stupid!
OK, there are times when the things you do turn out to not exactly be a good idea.
An inappropriate remark you made about a colleague?
Spending your last cent on those expensive, fancy shoes?
Making a really dumb business move?
Been there, done that.
You can legitimately ask yourself after any of those incidents “What the heck was I thinking?”
It’s true that certain things you do qualify as stupid, but that doesn’t mean that you are too.
And this tiny difference puts things into perspective and helps you avoid the trap of internalizing the situations in which you act daft.
In other words, you need to understand that acting stupid and being stupid are two completely different things.
This doesn’t absolve you of the blame and responsibility for your actions, however, or give you a free pass to do whatever you please without thinking about the consequences.
So, whenever you’re compelled to exclaim that you’re stupid for doing this or that, mind your language and say “I did something stupid.”
2. I Hate My Body
This is one of the worst things you can tell yourself.
Social media, TV and print commercials and the fashion industry are constantly raising the bar and setting unrealistic expectations when it comes to beauty standards, and it’s hard not to compare yourself to all those impossibly and yet effortlessly slender and attractive models and celebrities.
Many people are increasingly sensitive to their physical appearance, and at the same time, they’re too judgmental when it comes to their perceived imperfections.
It’s OK to strive to become the best version of yourself, but negative self-talk won’t get you very far.
Obsessing about your weight, nose, or teeth is something that can have some serious consequences on your self-esteem, so instead of standing in front of the mirror analyzing your “flabby stomach”, “big nose”, or “yellow, crooked teeth”, focus on what you like about your looks.
The trick is to take care of your body, train, eat clean, and try to improve what you can, but avoid negative qualifications.
Also, change your point of view and recognize every positive change that you notice as that will motivate you to persist.
3. I Can’t Do It
The thing is that you’re more capable than you realize.
You are most probably just too insecure and afraid of failure. And it’s this crippling fear that paralyzes you mentally and prevents you from trying.
As this belief is deeply rooted in your mind, you need to become aware of it and try to summon all your willpower in order to change it.
One way out of this blind alley is embracing failure. Once you realize that the world won’t end if you try and not succeed, a huge burden will fall off your chest, and it will be much easier to apply for that job you used to think was out of your league or ask for a raise.
Another useful thing to do is change your narrative – tell yourself “I can do it!”
At first, you’ll have to fake that sense of self-confidence, but with every seemingly impossible thing you achieve (or even fail to achieve), you’ll be able to dispel that dark cloud of fear, doubt, and insecurity.
4. My Life Sucks
Life isn’t always fair.
And this applies to everybody, not just you, even though Instagram might claim otherwise.
When you look at snaps of all those shining, happy people who don’t seem to have a single care in the world and who spend their days having fun with their equally cool friends, traveling around the globe and dining at Michelin-Star restaurants, you feel as if you’re the biggest loser ever.
Again, focusing on the good things in your life and coming to terms that not everything can be as we’ve planned can help you break that vicious circle of despair and dissatisfaction.
It’s also a good idea to cut down on your social media time and do something that will make you feel better, such as taking a walk or going for a drink with your friends.
5. Nobody Loves Me
Whenever you feel compelled to say this to yourself, remember that it couldn’t be further from the truth, because there’s always you.
And you love yourself, right?
This is something that most of us say when we’re consumed by self-pity and when we’re feeling down.
But the problem is that if you keep on telling yourself that you’re not worthy of love, you subconsciously start behaving in a manner that prevents you from meeting that someone special.
You stop going out and attending parties, not to mention that you refuse your friends’ attempts to introduce you to new people.
On top of it all, the feeling that you’re in a dark place affects your demeanor which means that you’ll be off Mr/Mrs Right’s radar.
6. I Give Up
We’re all sick and tired of everything, and that’s OK.
But if you keep on repeating these three words to yourself, they will be stuck in your head, and you’ll start believing that it’s the only option you’ve got.
Life is full of hurdles, but it’s what makes it exciting and dynamic.
Whenever you’re on the verge of waving the white flag, take a break and try to remember the times when you felt the same and when you faced a seemingly insurmountable problem.
Ask yourself how things panned out back then and you’ll realize that you managed to overcome numerous roadblocks over the course of your life.
And this one is no different, but overthinking gets the better of you.
What you should do is get some rest and take your mind off the problem for a while.
7. This Can Only Happen to Me!
Or as Adrian Mole succinctly puts it “Just my luck!”
By blaming that invisible, supernatural entity whose main task is to make your life miserable for everything bad that happens in your life, you actually give up control over your destiny and let yourself go with the flow.
Again, that fear of being responsible for a potential failure rears its ugly head and turns you into a passive observer of your own life.
But, what you need to understand is that you’re an agent of change and that you don’t have to take a backseat.
Things you say to yourself cut deeper than other people’s words, and you need to change your tune for the sake of your well-being. Remember, if you don’t have anything nice to say, don’t say anything at all.
Rebecca is a freelance translator passionate about her work, and grateful for the travels it has taken her on. She has recently started writing about some of her experiences at RoughDraft.
Let’s accept it. Not all businesses are serious about their workplace safety.
Some of them leave safety to the mercy of a few fire extinguishers and a couple of warning signs here and there. Some of them don’t bother if chemicals are stored near their MCB box or employees walk across a wet floor. Unfortunately, they only realize the importance of workplace safety when a serious accident occurs.
Why wait for something unpleasant to happen?
Create a secure and positive work environment by avoiding common safety mistakes in the workplace given below.
Failing to Use a Ladder Properly
According to one report, 500,000 people are treated for ladder-related injuries every year. Even worse, over 400 people lose their lives after succumbing to those injuries.
Tiny ladder-related mistakes can lead to serious injuries. For example, some use unsteady ladders that can easily slide out while a person is on them. Leaning from a ladder is another mistake that can affect your balance. Using other objects like chairs, stools, or scaffolding as a ladder is not safe, either.
Not Getting the Machine Inspected
If you are like most business owners, you are likely to skip an inspection if a machine is working fine.
Well, this can be problematic down the road.
You never know when an underlying fault can lead to damage or harm the operator. Therefore, make sure to get your machinery inspected on time.
Timely and proper inspections make sure that your machines are running correctly and won’t pose a risk. With regular inspections, your equipment stays in top shape and won’t cause a halt due to breakdowns.
Not Keeping the Facility Clean and Organized
Make sure to keep your facility clean and organized. Otherwise, a huge stack of waste or debris can lead to a fire outbreak if it comes into contact with a spark or flammable material.
Moreover, an unclean work environment is an ideal breeding ground for various bacteria and germs, thereby affecting the health of your workforce. By practicing proper work hygiene, you can prevent the growth of harmful viruses and bacteria, ensuring a safe and healthy work environment.
It goes without saying that having clean premises will also improve your business image.
Inappropriate Storage of Chemicals
Your workers are also prone to risks when they come into contact with harmful chemicals or toxic substances like gasoline, paint, and insulation. Here is how you can minimize the risks associated with them.
Keep the areas ventilated.
Make sure your employees wear protective gear like gloves and masks while using chemicals.
Tell your employees to seek treatment if they experience itchiness or don’t feel well after using the chemicals.
Store the chemicals away from any equipment and electrical circuit boards.
Read the manufacturer’s instructions on how to store them.
Not Performing a Safety Risk Assessment
Let’s admit it.
We are often so used to our work environment that we forget about our safety. In fact, we take it for granted. We assume that we are aware of the risks as well as the ways to deal with them.
That’s not the right approach.
You need to assess the risk factors present in your workplace. For example, you never know when an overheated system can turn into a hazard or when a slippery floor can injure someone.
Therefore, you should perform a safety risk assessment of your workplace frequently. A health and workplace safety professional can help you with this task. They can assess your workplace for risks and help you deal with them.
No matter how upscale and sophisticated your work environment is, you are not immune to electrical hazards. Electrical hazards are one of the major causes of workplace fatalities, even in developed nations like the US.
Power fluctuations can also affect the equipment in your workplace. A big power surge can damage your machinery, while an unexpected power outage can lead to loss of work.
Here are some tips to avoid electrical hazards at your organization.
Make sure to power off the device before repairing it.
Update your equipment as old equipment may have frayed wires or worn out segments.
Avoid overloading outlets with too many devices and tools. Avoid plugging in more than one high-wattage device at a time.
Unplug equipment when not in use to save energy as well as minimize the risk of fire or shock.
Get your electrical cords inspected once a month to make sure that they are not cracked or damaged.
Don’t run the wires through high traffic areas like carpets or doorways.
The repairs and installation should be done by a licensed electrician.
All the equipment should be certified.
Not Wearing Protective Gear
One of the major causes of workplace accidents is not wearing protective gear like gloves and helmets.
Also known as Personal Protective Equipment (PPE), protective gear protects workers against several risks on the job. These risks can be anything from falling debris and wet floors to electrical sparks and poisonous gases. For example, wearing a hard hat protects workers against head injuries or shocks caused by falling objects.
These protective gears generally include items like eye protection, high visibility clothing, safety footwear, helmets and respiratory protective equipment like a mask.
Many employees don’t bother if they walk on a wet floor. Some don’t realize that the lift is out of order until it halts in the middle. Some may not be able to find the emergency exit door when a fire breaks out.
This is why safety signs are used to help workers identify the risks. They warn the employees about potential dangers. For example, the sign of a wet floor will warn them to avoid the pathway so that they don’t get injured.
Some of the common workplace safety signs are prohibition signs, mandatory signs, warning signs, fire safety signs, danger signs, general information signs, and emergency signs.
These are the workplace safety mistakes you can avoid to keep your employees and infrastructure safe and sound. What do you think? Please drop your opinions in the comment box below!
When WordPress 5 was released, I was excited about making use of the Gutenberg editor to create custom blocks, as posts on my personal blog had a couple of features I could turn into a block, making it easier to set up my content. It was definitely a cool thing to have, yet it still felt quite bloated.
Around the same time, I started reading more and more about static site generators and the JAMstack (this article by Chris Ferdinandi convinced me). With personal side projects, you can kind of dismiss a wide variety of issues, but as a professional, you have to ensure you output the best quality possible. Performance, security and accessibility become the first things to think about. You can definitely optimize WordPress to be pretty fast, but faster than a static site on a CDN that doesn’t need to query the database or generate your page every time? Not so easy.
I thought that I could put this into practice with a personal project of mine to learn and then be able to use this for professional projects, and maybe some of you would like to know how, too. In this article, I will go over how I made the transition from WordPress to a specific static site generator named Hugo.
Hugo is built in Go, which is a pretty fast and easy to use language once you get used to the syntax, which I will explain. It all compiles locally so you can preview your site right on your computer. The project will then be saved to a private repository. Additionally, I will walk you through how to host it on Netlify, and save your images on a Git LFS (Large File Storage). Finally, we’ll have a look at how we can set up a content management system to add posts and images (similar to the WordPress backend) with Netlify CMS.
Note that all of this is absolutely free, which is pretty amazing if you ask me (although you’ll have to pay extra if you use up all your LFS storage or if your site traffic is intense). Also, I am writing this from a Bitbucket user point of view, running on a Mac. Some steps might be slightly different but you should be able to follow along, no matter what setup you use.
You’ll need to be somewhat comfortable with HTML, CSS, JS, Git and the command terminal. Having a few notions with templating languages such as Liquid could be useful as well, but we will review Hugo’s templates to get you started. I will, nonetheless, provide as many details as possible!
I know it sounds like a lot, and before I started looking into this, it was for me, too. I will try to make this transition as smooth as possible for you by breaking down the steps. It’s not very difficult to find all the resources, but there was a bit of guesswork involved on my part, going from one documentation to the next.
Note: If you have trouble with some of these steps, please let me know in the comments and I’ll try to help, but please note this guide is meant for a simple, static blog that doesn’t have a dozen widgets or comments (you can set those up later), not a company site or personal portfolio. You undoubtedly could adapt it for those, but for the sake of simplicity, I’ll stick to a simple, static blog.
Before we do anything, let’s create a project folder where everything from our tools to our local repository is going to reside. I’ll call it “WP2Hugo” (feel free to call it anything you want).
This tutorial will make use of a few command line tools such as npm and Git. If you don’t have them already, install those on your machine:
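If you’re on a Mac with Homebrew (the setup assumed in this article), one quick way to get both is the following; the package names are the standard Homebrew ones:

```shell
# Install Node.js (which bundles npm) and Git via Homebrew
brew install node git

# Verify the installs
node --version
git --version
```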
Go to your WordPress admin, and open the Tools menu, Export submenu. You can export what you want from there. I’ll refer to the exported file as YOUR-WP-EXPORT.xml.
You can select exactly what data you want to export from your WordPress blog.
Inside our WP2Hugo folder, I recommend creating a new folder named blog2md in which you’ll place the files from the blog2md tool, as well as your XML export from WordPress (YOUR-WP-EXPORT.xml). Also, create a new folder in there called out where your Markdown posts will go. Then, open up your command terminal, and navigate with the cd command to your newly created “blog2md” folder (or type cd with a space and drag the folder into the terminal).
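If you still need to grab blog2md itself, a typical setup looks like this (assuming the tool’s repository lives at palaniraja/blog2md on GitHub, and that Node.js and Git are installed):

```shell
cd WP2Hugo
git clone https://github.com/palaniraja/blog2md.git blog2md
cd blog2md
npm install   # installs the tool's dependencies listed in package.json
mkdir out     # the folder the converted Markdown posts will go into
```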
You can now run the following commands to get your posts:
node index.js w YOUR-WP-EXPORT.xml out
Look into the /WP2Hugo/blog2md/out directory to check whether all of your posts (and potential pages) are there. If so, you might notice there’s something about comments in the documentation: I had a comment-free blog so I didn’t need them to be carried through but Hugo does offer several options for comments. If you had any comments on WordPress, you can export them for later re-implementation with a specialized service like Disqus.
If you’re familiar enough with JS, you can tweak the index.js file to change how your post files will come out by editing the wordpressImport function. You may want to capture the featured image, remove the permalink, change the date format, or set the type (if you have posts and pages). You’ll have to adapt it to your needs, but know that the loop (posts.forEach(function(post) ... )) runs through all the posts from the export, so you can check for the XML content of each post in that loop and customize your Front Matter.
Additionally, if you need to update URLs contained in your posts (in my case, I wanted to make image links relative instead of absolute) or the date formatting, this is a good time to do so, but don’t lose sleep over it. Many text editors offer bulk editing so you can plug in a regular expression and make the changes you want across your files. Also, you can run the blog2md script as many times as needed, as it will overwrite any previously existing files in the output folder.
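If you’d rather script the URL rewrite than bulk-edit in your text editor, a small Node helper could do it; the domain and file paths here are placeholders for your own setup:

```javascript
// makeRelative: rewrite absolute upload URLs to relative ones in a Markdown string.
// oldDomain is a regex-escaped placeholder for your old WordPress address.
function makeRelative(markdown, oldDomain) {
  const pattern = new RegExp('https?://' + oldDomain + '/uploads/', 'g');
  return markdown.replace(pattern, '/uploads/');
}

const sample = '![Portrait](https://my-old-blog.com/uploads/portrait.jpg)';
console.log(makeRelative(sample, 'my-old-blog\\.com'));
// -> ![Portrait](/uploads/portrait.jpg)
```

Wrap this in a loop over your `out` folder with `fs.readdirSync` and `fs.writeFileSync` to process every Markdown file at once.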
Once you have your exported Markdown files, your content is ready. The next step is to get your WordPress theme ready to work in Hugo.
2. Preparing Your Blog Design
My blog had a typical layout with a header, a navigation bar, content and sidebar, and a footer — quite simple to set up. Instead of copying pieces of my WordPress theme, I rebuilt it all from scratch to ensure there were no superfluous styles or useless markup. This is a good time to implement new CSS techniques (pssst… Grid is pretty awesome!) and set up a more consistent naming strategy (something like CSS Wizardry’s guidelines). You can do what you want, but remember we’re trying to optimize our blog, so it’s good to review what you had and decide if it’s still worth keeping.
Start by breaking down your blog into parts so you can clearly see what goes where. This will help you structure your markup and your styles. By the way, Hugo has the built-in ability to compile Sass to CSS, so feel free to break up those styles into smaller files as much as you want!
When I say simple, I mean really simple.
Alternatively, you can completely bypass this step for now, and style your blog as you go when your Hugo site is set up. I had the basic markup in place and preferred an iterative approach to styles. It’s also a good way to see what works and what doesn’t.
3. Setting Up A New Repository
Now that that is out of the way, we need to set up a repository. I’m going to assume you will want to create a new repository for this, which is going to be a great opportunity to use Git LFS (Large File Storage). The reason I advise to do this now is that implementing Git LFS when you already have hundreds of images is not as smooth. I’ve done it, but it was a headache you’re likely to want to avoid. This will also provide other benefits down the road with Netlify.
While I’ll be doing all this via Bitbucket and their proprietary Git GUI, Sourcetree, you can absolutely do this with GitHub and GitLab and their own desktop tools. You can also do it directly in the command terminal, but I like to automate and simplify the process as much as I can, reducing the risk of making silly mistakes.
When you’ve created your new repository on the Git platform of your choice, create an empty folder inside your local project folder (WP2Hugo), e.g. hugorepo, then open up your command terminal or Git GUI tool and initialize your local Git repository; then, link it to the remote repository (you can usually find the exact command to use on the newly created remote repository).
I’d recommend creating a dev (or stage) branch so that your main branch is strictly used for production deployments. It’ll also limit new builds to be generated only when you’re done with a potential series of changes. Creating a branch can be done locally or on your repository’s remote webpage.
GitHub makes it easy to create a branch by clicking the branch switcher and typing a new name. On GitLab, you need to open the “Plus” dropdown to access the option. Bitbucket requires you to open the “Plus” menu on the left to open the slide-out menu and click “Create a branch” in the “Get to work” section.
4. Activating Git LFS (Optional)
Git Large File Storage is a Git feature that allows you to save large files in a more efficient way, such as Photoshop documents, ZIP archives and, in our case, images. Since images can need versioning but are not exactly code, it makes sense to store them differently from regular text files. The way it works is by storing the image on a remote server, and the file in your repository will be a text file which contains a pointer to that remote resource.
Alas, it’s not an option you just click to enable. You must set up your repository to activate LFS and this requires some work locally. With Git installed, you need to install a Git-LFS extension:
git lfs install
If, like me, that command didn’t work for you, try the Homebrew alternative (for macOS or Linux):
brew install git-lfs
Once that’s done, you’ll have to specify which files to track in your repository. I will host all of the images I uploaded in WordPress’s /upload folder in an identically-named folder on my Hugo setup, except that this folder will be inside a /static folder (which resolves to the root once compiled). Decide on your folder structure, and track your files inside:
git lfs track "static/uploads/*"
This will track any file inside the /static/uploads folder. You can also use the following:
git lfs track "*.jpg"
This will track any and all JPG files in your repository. You can mix and match to only track JPGs in a certain folder, for example.
With that in place, you can commit your LFS configuration files to your repository and push that to your remote repository. The next time you locally commit a file that matches the LFS tracking configuration, it will be “converted” to an LFS resource. If working on a development branch, merge this commit into your main branch.
Let’s now take a look at Netlify.
5. Creating The Site On Netlify
At this point, your repository is set up, so you can go ahead and create an account on Netlify. You can even log in with your GitHub, GitLab or Bitbucket account if you like. Once on the dashboard, click the “New site from Git” button in the top right-hand corner, and create your new Netlify site.
Note: You can leave all the options at their default values for now.
Select your Git provider: this will open a pop-up window to authenticate you. When that is done, the window will close and you’ll see a list of repositories on that Git provider you have access to. Select your freshly created repo and continue. You’ll be asked a few things, most of which you can just leave by default as all the options are editable later on.
For now, in the Site Settings, click “Change site name” and name your site anything you want — I’ll go with chris-smashing-hugo-blog. We will now be able to access the site via chris-smashing-hugo-blog.netlify.com: a beautiful 404 page!
6. Preparing For Netlify Large Media (Optional)
If you set up Git LFS and plan on using Netlify, you’ll want to follow these steps. It’s a bit more convoluted but definitely worth it: it’ll enable you to set query strings on image URLs that will be automatically transformed.
Let’s say you have a link to portrait.jpg which is an image that’s 900×1600 pixels. With Netlify Large Media, you can call the file portrait.jpg?nf_resize=fit&w=420, which will proportionally scale it. If you define both w and h, and set nf_resize=smartcrop, it’ll resize by cropping to focus on the point of interest of the image (as determined by a fancy algorithm, a.k.a. robot brain magic!). I find this to be a great way to have thumbnails like the ones WordPress generates, without needing several files for an image on my repository.
If this sounds appealing to you, let’s set it up!
The first step is installing Netlify’s command-line interface (CLI) via npm:
npm install netlify-cli -g
If it worked, running the command netlify should result in info about the tool.
You’ll then need to make sure you are in your local repository folder (that I named “hugorepo” earlier), and execute:
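The command elided here should be Netlify’s login command, which opens a browser window so you can authorize a token:

```shell
netlify login
```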
Authorize the token. Next, we’ll have to install the Netlify Large Media plugin. Run:
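At the time this workflow was current, the Large Media plugin was installed with the two commands below (as documented by Netlify; double-check their docs in case the CLI has since changed):

```shell
netlify plugins:install netlify-lm-plugin
netlify lm:install
```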
There should be a command line shown at the end of the resulting message that you must copy (which should look like /Users/YOURNAME/.netlify/helper/path.bash.inc on Mac) — run it. Note that Keychain might ask you for your machine’s administrator password on macOS.
The next step is to link Netlify:
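Linking is done with the CLI’s link command, which associates your local repository with the Netlify site:

```shell
netlify link
```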
You can provide your site name here (I provided the chris-smashing-hugo-blog name I gave it earlier). With this in place, you just need to set up the Large Media feature by executing the following:
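The Large Media setup command, assuming the plugin from the previous step is installed, is:

```shell
netlify lm:setup
```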
Commit these new changes to your local repository, and push them to the remote development branch. I had a few errors with Sourcetree and Keychain along the lines of git "credential-netlify" is not a git command. If that’s your case, try to manually push with these commands:
git add -A
git commit -m "Set up Netlify Large media"
brew tap netlify/git-credential-netlify
brew install git-credential-netlify
Try pushing your commit through now (either with your GUI or command terminal): it should work!
Note: If you change your Netlify password, run netlify logout and netlify login again.
You might ask: “All this, and we still haven’t even initialized our Hugo build?” Yes, I know, it took a while but all the preparations for the transition are done. We can now get our Hugo blog set up!
7. Setting Up Hugo On Your Computer
You’ll first need to install Hugo on your computer with any of the provided options. I’ll be using Homebrew but Windows users can use Scoop or Chocolatey, or download a package directly.
brew install hugo
You’ll then need to create a new Hugo site but it won’t like setting it up in a non-empty folder. First option: you can create it in a new folder and move its contents to the local repository folder:
hugo new site your_temporary_folder
Second option: you can force it to install in your local repository with a flag, just make sure you’re running that in the right folder:
hugo new site . --force
You now have a Hugo site, which you can spin up with this command:
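The command in question is Hugo’s built-in development server:

```shell
hugo server
```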
You’ll get a local preview on localhost. Sadly, you have no content and no theme of your own. Not to worry, we’ll get that set up really soon!
Let’s first have a look at the configuration file (config.toml in my case): let’s set up the blog’s name and base URL (this must match the URL on your Netlify dashboard):
title = "Chris’ Smashing Hugo Blog"
baseURL = "https://chris-smashing-hugo-blog.netlify.com"
This link will be overwritten while you develop locally, so you shouldn’t run into 404 errors.
Let’s give Hugo our exported articles in Markdown format. They should be sitting in the /WP2Hugo/blog2md/out folder from the first step. In the Hugo folder (a.k.a. the local repository directory), access the content folder and create a subfolder named posts. Place your Markdown files in there, and then let’s get a theme set up.
8. Creating Your Custom Theme
For this step, I recommend downloading the Saito boilerplate, which is a theme with all the partials you’ll need to get started (and no styles) — a very useful starting point. You could, of course, look at this collection of ready-made themes for Hugo if you want to skip over this part of the process. It’s all up to you!
From the local repository folder, clone the theme into themes/saito:
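Assuming the Saito boilerplate is hosted at hakuoku/saito-boilerplate on GitHub (verify the URL on the project page), the clone command would be:

```shell
git clone https://github.com/hakuoku/saito-boilerplate.git themes/saito
```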
You can rename this folder to anything you want, such as cool-theme. You’ll have to tell your Hugo configuration which theme you want to use by editing your config.toml/yaml/json file. Edit the theme value to saito, or cool-theme, or whatever your theme’s folder name is. Your preview should now show your blog’s title along with a copyright line. It’s a start, right?
Open the theme’s layout/partials/home.html file and edit it to display your content, limiting to the five first items which are of type posts (inside the content/posts/ folder), with range, first and where:
{{ range first 5 (where .Paginator.Pages "Type" "posts") }}
<article class="post post--{{ .Params.class }}">
<h2 class="post__title">{{ .Title }}</h2>
</article>
{{ end }}
Your content is now visible, in the most basic of ways. It’s time to make it yours — let’s dive in!
All operations in Hugo are defined inside delimiters: double curly braces (e.g. {{ .Title }}), which should feel familiar if you’ve done a bit of templating before. If you haven’t, think of it as a way to execute operations or inject values at a specific point in your markup. Blocks are closed with an {{ end }} tag, for all operations aside from shortcodes.
Themes have a layout folder which contains the pieces of the layout. The _default folder will be Hugo’s starting point, baseof.html being (you guessed it!) the base of your layout. It will call each component, called “partials” (more on this on Hugo’s documentation about Partial Template), similar to how you would use include in PHP, which you may have already seen in your WordPress theme. Partials can call other partials — just don’t make it an infinite loop.
You can call a partial with the {{ partial "file.html" . }} syntax. The partial keyword is pretty straightforward, but the two other parts might need explaining. You might expect to have to write partials/file.html, but since all partials are to be in the “partials” folder, Hugo can find that folder just fine. Of course, you can create subfolders inside the “partials” folder if you need more organization.
You may have noticed a stray dot: this is the context you’re passing to your partial. If you had a menu partial, and a list of links and labels, you could pass that list into the partial so that it could only access to that list, and nothing else. I’ll talk more about this elusive dot in the next section.
Your baseof.html file is a shell that calls all the various partials needed to render your blog layout. It should have minimal HTML and lots of partials:
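As a rough sketch (partial names like head.html, header.html and footer.html are just the ones this kind of boilerplate typically uses; yours may differ), baseof.html could look like:

```html
<!DOCTYPE html>
<html lang="{{ .Site.LanguageCode }}">
<head>
  {{ partial "head.html" . }}
</head>
<body>
  {{ partial "header.html" . }}
  {{ block "main" . }}{{ end }}
  {{ partial "footer.html" . }}
</body>
</html>
```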
The {{ block "main" . }}{{ end }} line is different because it is a block that is defined with a template based on the content of the current page (homepage, single post page, etc.) with {{ define "main" }}.
In your theme, create a folder named assets in which we will place a css folder. It will contain our SCSS files, or a trusty ol’ CSS file. Now, there should be a css.html file in the partials folder (which gets called by head.html). To convert Sass/SCSS to CSS, and minify the stylesheet, we would use this series of functions (using the Hugo Pipes syntax instead of wrapping the functions around each other):
{{ $style := resources.Get "css/style.scss" | toCSS | minify | fingerprint }}
<link rel="stylesheet" href="{{ $style.Permalink }}">
As a bonus — since I struggled to find a straight answer — if you want to use Autoprefixer, Hugo also implements PostCSS. You can add an extra pipe function between toCSS and minify on the first line, like so:
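With that extra pipe added, the first line would read (postCSS being the Hugo Pipes function name for this step):

```
{{ $style := resources.Get "css/style.scss" | toCSS | postCSS | minify | fingerprint }}
```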
Create a “postcss.config.js” file at the root of your Hugo blog, and pass in the options, such as:
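A minimal config, assuming you’ve installed the autoprefixer package via npm, could be:

```javascript
// postcss.config.js
module.exports = {
  plugins: [
    require('autoprefixer') // assumes `npm install autoprefixer` has been run
  ]
};
```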
And presto! From Sass to prefixed, minified CSS. The “fingerprint” pipe function is to make sure the filename is unique, like style.c66e6096bdc14c2d3a737cff95b85ad89c99b9d1.min.css. If you change the stylesheet, the fingerprint changes, so the filename is different, and thus, you get an effective cache busting solution.
9. Notes On The Hugo Syntax
I want to make sure you understand “the Dot”, which is how Hugo scopes variables (or in my own words, provides a contextual reference) that you will be using in your templates.
The Dot And Scoping
The Dot is like a top-level variable that you can use in any template or shortcode, but its value is scoped to its context. The Dot’s value in a top-level template like baseof.html is different from the value inside loop blocks or with blocks.
Let’s say this is in our template in our head.html partial:
{{ with .Site.Title }}{{ . }}{{ end }}
Even though we are running this in the main scope, the Dot’s value changes based on context, which is .Site.Title in this case. So, to print the value, you only need to write . instead of re-typing the variable name again. This confused me at first, but you get used to it really quickly, and it helps with reducing redundancy since you only name the variable once. If something doesn’t work, it’s usually because you’re trying to call a top-level variable inside a scoped block.
So how do you use the top-level scope inside a scoped block? Well, let’s say you want to check for one value but use another. You can use $ which will always be the top-level scope:
{{ with .Site.Params.InfoEnglish }}{{ $.Site.Params.DescriptionEnglish }}{{ end }}
Inside our condition, the scope is .Site.Params.InfoEnglish but we can still access values outside of it with $, where intuitively using .Site.Params.DescriptionEnglish would not work because it would attempt to resolve to .Site.Params.InfoEnglish.Site.Params.DescriptionEnglish, throwing an error.
You can assign variables by using the following syntax:
{{ $customvar := "custom value" }}
The variable name must start with $, and the assignment operator must be := the first time the variable is assigned, and = afterwards, like so:
{{ $customvar = "updated value" }}
The problem you might run into is that this won’t transpire out of the scope, which brings me to my next point.
The Scratch functionality allows you to assign values that are available in all contexts. Say you have a list of movies in a movies.json file:
"name": "The Room",
"name": "Back to the Future",
"name": "The Artist",
Now, you want to iterate over the file’s contents and store your favorite one to use later. This is where Scratch comes into play:
{{ .Scratch.Set "favouriteMovie" "None" }} {{/* Optional, just to get you to see the difference in syntax based on the scope */}}
{{ range .Site.Data.movies }}
  {{ if ge .rating 10 }}
    {{/* We must use .Scratch prefixed with a $, because the scope is .Site.Data.movies, at the current index of the loop */}}
    {{ $.Scratch.Set "favouriteMovie" .name }}
  {{ end }}
{{ end }}
My favourite movie is {{ .Scratch.Get "favouriteMovie" }}
<!-- Expected output => My favourite movie is Back to the Future -->
With Scratch, we can extract a value from inside the loop and use it anywhere. As your theme gets more and more complex, you will probably find yourself reaching for Scratch.
Note: This is merely an example as this loop can be optimized to output this result without Scratch, but this should give you a better understanding of how it works.
If you’re as picky about the output as I am, you might notice some undesired blank lines. This is because Hugo will parse your markup as is, leaving blank lines around conditionals that were not met, for example.
Let’s say we have this hypothetical partial:
{{ if eq .Site.LanguageCode "en-us" }}
<p>Welcome to my blog!</p>
{{ end }}
<img src="/uploads/portrait.jpg" alt="Blog Author">

If the site’s language code is not en-us, this will be the HTML output (note the three empty lines before the image tag):




<img src="/uploads/portrait.jpg" alt="Blog Author">
Hugo provides a syntax to address this, with a hyphen beside the curly braces on the inside of the delimiter: {{- will trim the whitespace before the delimiter, and -}} will trim the whitespace after it. You can use either or both at the same time, but just make sure there is a space between the hyphen and the operation inside of the delimiter.
As such, if your template contains the following:
{{- if eq .Site.LanguageCode "en-us" -}}
<p>Welcome to my blog!</p>
{{- end -}}
<img src="/uploads/portrait.jpg" alt="Blog Author">
…then the markup will result in this, with no more line breaks between the p and img tags:

<p>Welcome to my blog!</p><img src="/uploads/portrait.jpg" alt="Blog Author">

This can be helpful for other situations like elements with display: inline-block that should not have whitespace between them. Conversely, if you want to make sure each element is on its own line in the markup (e.g. in a range loop), you’ll have to carefully place your hyphens to avoid “greedy” whitespace trimming.
10. Content And Data
Your content is stored as Markdown files, but you can use HTML, too. Hugo will render it properly when building your site.
Your homepage will call the _default/list.html layout, which might look like this:
partial "list.html" .
The main block calls the list.html partial with the context of ., a.k.a. the top level. The list.html partial may look like this:
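A minimal version of that partial, looping over the paginated posts, might look like the sketch below; the markup is a placeholder, but .Permalink, .Title and .Summary are Hugo’s built-in page variables, and Hugo ships an internal pagination template:

```
{{ range .Paginator.Pages }}
<article>
  <h2><a href="{{ .Permalink }}">{{ .Title }}</a></h2>
  {{ .Summary }}
</article>
{{ end }}
{{ template "_internal/pagination.html" . }}
```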
Now we have a basic list of our articles, which you can style as you wish! The number of articles per page is defined in the configuration file, with paginate = 5 (in TOML).
You might be utterly confused, as I was, by the date formatting in Hugo. The way each time unit is mapped to a number (first month, second day, third hour, etc.) made a lot more sense to me once I saw the visual explanation below that the Go language documentation provides — which is kind of weird, but kind of smart, too!
Jan 2 15:04:05 2006 MST
=> 1 2 3 4 5 6 -7
Now all that’s left to do is to display your post on a single page. You can edit the post.html partial to customize your article’s layout:
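A bare-bones sketch of such a partial, using only Hugo’s built-in page variables (adapt the markup to your own design):

```
<article class="post">
  <h1>{{ .Title }}</h1>
  <time datetime="{{ .Date.Format "2006-01-02" }}">{{ .Date.Format "January 2, 2006" }}</time>
  {{ .Content }}
</article>
```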
If you’d like to customize the URL, update your configuration file by adding a [permalinks] option (TOML), which in this case will make the URLs look like my-blog.com/post-slug/:
[permalinks]
  posts = ":filename/"
If you want to generate an RSS feed of your content (because RSS is awesome), add the following in your site configuration file (Saito’s default template will display the appropriate tags in head.html if these options are detected):
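A plausible TOML snippet for this (rssLimit and the outputFormats table are standard Hugo options; adjust the values to taste):

```toml
rssLimit = 10 # Maximum number of items in the feed

[outputFormats.RSS]
  mediaType = "application/rss+xml"
  baseName = "feed" # Generates /feed.xml instead of the default /index.xml
```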
But what if you had some sort of content outside of a post? That’s where data templates come in: you can create JSON files and extract their data to create your menu or an element in your sidebar. YAML and TOML are also options, but less readable with complex data (e.g. nested objects). You could, of course, set this in your site’s configuration file, but it is — to me — a bit less easy to navigate and less forgiving.
Let’s create a list of “cool sites” that you may want to show in your sidebar — with a link and a label for each site as an array in JSON:
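Such a JSON file might look like this; the sites listed are just placeholder examples:

```json
[
  { "link": "https://gohugo.io", "label": "Hugo" },
  { "link": "https://www.netlify.com", "label": "Netlify" }
]
```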
You can save this file in your repository root, or your theme root, inside a data folder, such as /data/coolsites.json. Then, in your sidebar.html partial, you can iterate over it with range using .Site.Data.coolsites:
{{ range .Site.Data.coolsites }}
<li><a href="{{ .link }}">{{ .label }}</a></li>
{{ end }}
This is very useful for any kind of custom data you want to iterate over. I used it to create a Google Fonts list for my theme, the categories posts can be in, the authors (with bio, avatar and homepage link), and which menus to show and in which order. You can really do a lot with this, and it is pretty straightforward.
A final thought on data and such: anything you put in your Hugo /static folder will be available on the root (/) on the live build. The same goes for the theme folder.
11. Deploying On Netlify
So you’re done, or maybe you just want to see what kind of magic Netlify operates? Sounds good to me, as long as your local Hugo server doesn’t return an error.
Commit your changes and push them to your remote development branch (dev). Head over to Netlify next, and access your site’s settings. You will see an option for “Build & deploy”. We’re going to need to change a couple of things here.
First, in the “Build settings” section, make sure “Build command” is set to hugo and that “Publish directory” is set to public (Hugo’s default publish directory, which I recommend you keep in your Hugo config file);
Next, in the “Deploy contexts” section, set “Production branch” to your main branch in your repository. I also suggest setting “Branch deploys” to “Deploy only the production branch”;
Finally, in the “Environment variables” section, edit the variables and click “New variable”. We’re going to pin the Hugo version to 0.53 with the following pair: set key to HUGO_VERSION and value to 0.53.
Now head on over to your remote repository and merge your development branch into your main branch: this will be the hook that will deploy your updated blog (this can be customized but the default is reasonable to me).
Back to your Netlify dashboard, your site’s “Production deploys” should have some new activity. If everything went right, this should process and resolve to a “Published” label. Clicking the deploy item will open an overview with a log of the operations. Up top, you will see “Preview deploy”. Go on, click it — you deserve it. It’s alive!
12. Setting Up A Custom Domain
Having the URL as my-super-site.netlify.com isn’t to your taste, and you already own my-super-site.com? I get it. Let’s change that!
Head over to your domain registrar and go to your domain’s DNS settings. Here, you’ll have to create a new entry: you can either set an ALIAS/CNAME record that points to my-super-site.netlify.com, or set an A record that points your domain to Netlify’s load balancer, which is 104.198.14.52 at the time of writing (check Netlify’s custom domain documentation for the current address).
When that’s done, head over to your site’s dashboard on Netlify and click “Domain settings”, where you’ll see “Add custom domain”. Enter your domain name to verify it.
You can also manage your domains via your dashboard in the Domains tab. The interface feels less confusing on this page, but maybe it will help make more sense of your DNS settings as it did for me.
Note: Netlify can also handle everything for you if you want to buy a domain through them. It’s easier but it’s an extra cost.
After you’ve set up your custom domain, in “Domain settings”, scroll down to the “HTTPS” section and enable the SSL/TLS certificate. It might take a few minutes but it will grant you a free certificate: your domain now runs on HTTPS.
13. Editing Content On Netlify CMS
If you want to edit your articles, upload images and change your blog settings like you’d do on WordPress’ back-end interface, you can use Netlify CMS which has a pretty good tutorial available. It’s a single file that will handle everything for you (and it is generator-agnostic: it will work with Jekyll, Eleventy, and so on).
You just need to upload two files in a folder:
the CMS (a single HTML file);
a config file (a YAML file).
The latter will hold all the settings of your particular site.
Go to your Hugo root’s /static folder and create a new folder which you will access via my-super-site.com/FOLDER_NAME (I will call mine admin). Inside this admin folder, create an index.html file by copying the markup provided by Netlify CMS:
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<!-- Include the script that builds the page and powers Netlify CMS -->
<script src="https://unpkg.com/[email protected]^2.0.0/dist/netlify-cms.js"></script>
The other file you’ll need to create is the configuration file: config.yml. It will allow you to define your site’s settings (name, URL, etc.) so that you can set up what your posts’ front matter should contain, as well as how your data files (if any) should be editable. It’s a bit more complex to set up, but that doesn’t mean it isn’t easy.
If you’re using GitHub or GitLab, start your config.yml file with:
backend:
  name: github # Or gitlab, depending on your provider
  repo: your-username/your-repo # Path to your repository
  branch: dev # Branch to update (optional; defaults to master)
If you’re using Bitbucket, it’s a bit different:
backend:
  name: bitbucket
  repo: your-username/your-repo # Path to your repository
  branch: dev # Branch to update (optional; defaults to master)
Then, for our uploads, we’ll have to tell the CMS where to store them:
media_folder: "static/images/uploads" # Media files will be stored in the repo under static/images/uploads
public_folder: "/images/uploads" # The src attribute for uploaded media will begin with /images/uploads
When you create a new post, the CMS will generate the slug for the filename which you can customize with three options:
encoding: "ascii" # You can also use "unicode" for non-Latin
clean_accents: true # Removes diacritics from characters like é or å
sanitize_replacement: "-" # Replace unsafe characters with this string
Finally, you’ll need to define how the data in your posts is structured. I will also define how the data file coolsites is structured — just in case I want to add another site to the list. These are set with the collections object which will definitely be the most verbose one, along with a nice handful of options you can read more about here.
collections:
  - name: "articles" # Used in routes, e.g., /admin/collections/articles
    label: "Articles" # Used in the Netlify CMS user interface
    folder: "content/posts" # The path to the folder where the posts are stored, usually content/posts for Hugo
    create: true # Allow users to create new documents in this collection
    slug: "{{slug}}" # Filename template, e.g., post-title.md
    fields: # The fields for each document, usually in front matter
      - { label: "Title", name: "title", widget: "string", required: true }
      - { label: "Draft", name: "draft", widget: "boolean", default: true }
      - { label: "Type", name: "type", widget: "hidden", default: "post" }
      - { label: "Publish Date", name: "date", widget: "date", format: "YYYY-MM-DD" }
      - { label: "Featured Image", name: "featuredimage", widget: "image" }
      - { label: "Author", name: "author", widget: "string" }
      - { label: "Body", name: "body", widget: "markdown" }
  - name: 'coolsites'
    label: 'Cool Sites'
    description: 'Website to check out'
    # A data-file collection also needs a file/files entry pointing at the coolsites data file — see the docs
    fields:
      - name: coolsites
        widget: list # each entry in the list is one site
        fields:
          - { label: 'Site URL', name: 'link', widget: 'string', hint: 'https://…' }
          - { label: 'Site Name', name: 'label', widget: 'string' }
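Since these snippets were shown piecemeal, here is a sketch of how they nest together in a single config.yml (the repo value and paths are placeholders to adapt to your own site):

```yaml
backend:
  name: github # or gitlab / bitbucket
  repo: your-username/your-repo # placeholder
  branch: dev

media_folder: "static/images/uploads"
public_folder: "/images/uploads"

slug:
  encoding: "ascii"
  clean_accents: true
  sanitize_replacement: "-"

collections:
  - name: "articles"
    label: "Articles"
    folder: "content/posts"
    create: true
    slug: "{{slug}}"
    fields:
      - { label: "Title", name: "title", widget: "string", required: true }
      - { label: "Body", name: "body", widget: "markdown" }
```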
Note: You can read more about how to configure individual fields in the Netlify CMS Widgets documentation which goes over each type of widget and how to use them — especially useful for date formats.
The last thing we need to do is to ensure only authorized users can access the backend! Using your Git provider’s authentication is an easy way to go about this.
Head over to your Netlify site and click the “Settings” tab. Then go to “Access control” which is the last link in the menu on the left side. Here, you can configure OAuth to run via GitHub, GitLab or Bitbucket by providing a key and a secret value defined for your user account (not in the repository). You’ll want to use the same Git provider as the one your repo is saved on.
Go to your “Settings” page on GitHub (click your avatar to reveal the menu), and access “Developer Settings”. Click “Register a new application” and provide the required values:
a name, such as “Netlify CMS for my super blog”;
a homepage URL, the link to your Netlify site;
a description, if you feel like it;
the application callback URL, which must be “https://api.netlify.com/auth/done”.
Save, and you’ll see your Client ID and Client Secret. Provide them to Netlify’s Access Control.
On GitLab, click your avatar to access the "Settings" page, then click "Applications" in the "User Settings" menu on the left. You'll see a form to add a new application. Provide the following information:
a name, such as “Netlify CMS for my super blog”;
a redirect URI, which must be “https://api.netlify.com/auth/done”;
the scope that should be checked is api.
Saving your application will give you your Application ID and Secret, that you can now enter on Netlify’s Access Control.
Head over to your user account settings (click your avatar, then "Bitbucket settings"). Under "Access Management", click "OAuth". In the "OAuth consumers" section, click "Add consumer". You can leave most things at their default values except for these:
a name, such as “Netlify CMS for my super blog”;
a callback URL, which must be “https://api.netlify.com/auth/done”;
the permissions that should be checked are:
Account: Email, Read, Write
Repositories: Read, Write, Admin
Pull Requests: Read, Write
Webhooks: Read and write
After saving, you can access your key and secret, which you can then provide back on Netlify’s Access Control.
After providing the tokens, go to Netlify, and find the Site Settings. Head to “Identity” and enable the feature. You can now add an External Provider: select your Git provider and click on “Enable”.
You can now access your Netlify site's backend and edit content. Every edit is a commit on your repo, in the branch specified in your configuration file. If you kept your main branch as the target for Netlify CMS, each time you save, it will run a new build. That's more convenient, but not as clean, since every "in-between" state gets published too.
Having it save on a dev branch allows you to have finer control on when you want to run a new build. This is especially important if your blog has a lot of content and requires a longer build time. Either way will work; it’s just a matter of how you want to run your blog.
Also, please note that Git LFS is something you installed locally, so images uploaded via Netlify CMS will be committed as regular files. If you pull the remote branch locally, the images should be converted to LFS files, which you can then commit and push to your remote branch. Note as well that Netlify CMS does not currently support LFS, so the images will not be displayed in the CMS, but they will show up in your final build.
What a ride! In this tutorial, you’ve learned how to export your WordPress post to Markdown files, create a new repository, set up Git LFS, host a site on Netlify, generate a Hugo site, create your own theme and edit the content with Netlify CMS. Not too bad!
What’s next? Well, you could experiment with your Hugo setup and read more about the various tools Hugo offers — there are many that I didn’t cover for the sake of brevity.
Businesses always need to optimize their processes to make their operations more profitable. One factor you need to pay special attention to is your business' utility consumption.
If you're willing to put in the effort, it is easy to reduce the utility consumption of your business. The problem is that most companies think it isn't possible.
That's not true at all. As proof, we'll share with you 7 tips to lower the utility bills in your business.
Use solar power partially
One thing you can do is rely on solar power. While a complete migration to solar energy might not be possible for your business, you can easily set up some solar panels and use solar power partially. Every unit of solar power you generate is a unit you don't have to buy from the grid.
Many authorities and governments around the world will also provide you with a tax credit when you set up solar panels and start using solar power. You can make your business more eco-friendly by doing so, too.
Educate your employees
If you're the only person making an effort to reduce your company's utility consumption, you will have limited success. You have to educate your employees to save, too.
Once you do so, you will be amazed at the number of resources which you can save. It will also mean that you’ll be able to reduce your bills drastically.
As a way to encourage them, you can provide your employees with added incentives, or simply add the savings to your bottom line. A team effort will help you succeed in saving big on utility costs.
Use data analytics
These days, you have the option to use data analytics to understand your utility consumption. While it might require you to spend some money to set up the sensors and gather data, it will be easier to understand where you can save money.
You can put up not just electricity meters but also water meters to make it easy for you to understand the utility consumption. It will help you save up to 10% of your electricity and water consumption.
Hire an expert
If you are having a hard time reducing your utility bills, it is a good idea to hire experts to help you out. Energy audit experts can help you understand your utility consumption. They can also give you recommendations in writing after conducting an audit. You just have to follow their recommendations.
Do not ignore the sunlight
One of the simplest ways to lower your energy consumption is to use sunlight during the daytime. Instead of always keeping curtains and blinds drawn, it is a good idea to open them during the day. It will ensure that your office has plenty of sunlight.
As the sunlight increases in your office, the consumption of electricity will go down during the daytime. You will not have to rely on artificial light.
Plug the leaks
During the summer months, if there are air leaks in your office, you will not be able to maintain a comfortable temperature; the air conditioner will consume more electricity to keep the desired temperature. So check your windows and doors, and plug any leaks you find.
Likewise, if any of the pipes in your office leak or are broken, the result will be excessive water consumption, which will once again increase your water bill. You have to find that leak and fix it.
Use smart power strips
These days, small and big businesses rely on gadgets to make their employees more efficient. One of the most common accessories in any office is a power strip.
The problem is that if the power strip is not smart, the gadgets plugged into it will keep drawing power throughout the day, and probably throughout the night as well.
This can drive your electricity bills through the roof.
A much better solution is to opt for smart power strips. You can program them to turn off automatically during the night.
Instruct your employees to shut down their devices before leaving the office. Both of these measures will make it easy for you to reduce electricity consumption by a significant amount.
These seven tips are the easiest ways for you to lower utility bills in your business. Following them can help you quickly increase your bottom line and ensure that you can have a smaller carbon footprint as well.